2023-07-21 18:14:16,721 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71
2023-07-21 18:14:16,741 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-21 18:14:16,761 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-21 18:14:16,762 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847, deleteOnExit=true
2023-07-21 18:14:16,762 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-21 18:14:16,762 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/test.cache.data in system properties and HBase conf
2023-07-21 18:14:16,763 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.tmp.dir in system properties and HBase conf
2023-07-21 18:14:16,764 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir in system properties and HBase conf
2023-07-21 18:14:16,764 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-21 18:14:16,765 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-21 18:14:16,765 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-21 18:14:16,895 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-21 18:14:17,285 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-21 18:14:17,290 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-21 18:14:17,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-21 18:14:17,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-21 18:14:17,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 18:14:17,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-21 18:14:17,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-21 18:14:17,293 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 18:14:17,293 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 18:14:17,294 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-21 18:14:17,294 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/nfs.dump.dir in system properties and HBase conf
2023-07-21 18:14:17,295 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir in system properties and HBase conf
2023-07-21 18:14:17,295 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 18:14:17,295 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-21 18:14:17,296 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-21 18:14:17,836 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 18:14:17,841 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 18:14:18,133 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-21 18:14:18,303 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-21 18:14:18,323 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 18:14:18,358 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 18:14:18,391 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/Jetty_localhost_42099_hdfs____yo45a4/webapp
2023-07-21 18:14:18,549 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42099
2023-07-21 18:14:18,589 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 18:14:18,590 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 18:14:18,985 WARN [Listener at localhost/37139] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 18:14:19,046 WARN [Listener at localhost/37139] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 18:14:19,068 WARN [Listener at localhost/37139] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 18:14:19,076 INFO [Listener at localhost/37139] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 18:14:19,096 INFO [Listener at localhost/37139] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/Jetty_localhost_33661_datanode____.b109zc/webapp
2023-07-21 18:14:19,216 INFO [Listener at localhost/37139] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33661
2023-07-21 18:14:19,657 WARN [Listener at localhost/39895] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 18:14:19,699 WARN [Listener at localhost/39895] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 18:14:19,703 WARN [Listener at localhost/39895] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 18:14:19,705 INFO [Listener at localhost/39895] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 18:14:19,712 INFO [Listener at localhost/39895] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/Jetty_localhost_36725_datanode____2lx5ni/webapp
2023-07-21 18:14:19,852 INFO [Listener at localhost/39895] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36725
2023-07-21 18:14:19,873 WARN [Listener at localhost/45527] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 18:14:19,937 WARN [Listener at localhost/45527] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 18:14:19,950 WARN [Listener at localhost/45527] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 18:14:19,965 INFO [Listener at localhost/45527] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 18:14:19,973 INFO [Listener at localhost/45527] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/Jetty_localhost_44087_datanode____kknguu/webapp
2023-07-21 18:14:20,160 INFO [Listener at localhost/45527] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44087
2023-07-21 18:14:20,185 WARN [Listener at localhost/36435] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 18:14:20,369 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x876b4f04bfe8fbad: Processing first storage report for DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7 from datanode e85904a5-7e61-4aaa-8f5c-ce8327889bbf
2023-07-21 18:14:20,371 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x876b4f04bfe8fbad: from storage DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7 node DatanodeRegistration(127.0.0.1:35467, datanodeUuid=e85904a5-7e61-4aaa-8f5c-ce8327889bbf, infoPort=40645, infoSecurePort=0, ipcPort=45527, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,371 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf9f8ccef1f239479: Processing first storage report for DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c from datanode 9db80221-e53d-40d5-bdb6-5e9a8daaef4e
2023-07-21 18:14:20,371 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf9f8ccef1f239479: from storage DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c node DatanodeRegistration(127.0.0.1:33205, datanodeUuid=9db80221-e53d-40d5-bdb6-5e9a8daaef4e, infoPort=33557, infoSecurePort=0, ipcPort=39895, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,371 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x876b4f04bfe8fbad: Processing first storage report for DS-50290ac9-618e-4ed2-a6ed-621a360e4d42 from datanode e85904a5-7e61-4aaa-8f5c-ce8327889bbf
2023-07-21 18:14:20,371 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x876b4f04bfe8fbad: from storage DS-50290ac9-618e-4ed2-a6ed-621a360e4d42 node DatanodeRegistration(127.0.0.1:35467, datanodeUuid=e85904a5-7e61-4aaa-8f5c-ce8327889bbf, infoPort=40645, infoSecurePort=0, ipcPort=45527, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,372 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf9f8ccef1f239479: Processing first storage report for DS-fa687924-bc3e-4253-8dd1-b9be370d9f6a from datanode 9db80221-e53d-40d5-bdb6-5e9a8daaef4e
2023-07-21 18:14:20,373 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf9f8ccef1f239479: from storage DS-fa687924-bc3e-4253-8dd1-b9be370d9f6a node DatanodeRegistration(127.0.0.1:33205, datanodeUuid=9db80221-e53d-40d5-bdb6-5e9a8daaef4e, infoPort=33557, infoSecurePort=0, ipcPort=39895, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,381 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xceb15e227fe986d: Processing first storage report for DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62 from datanode 5df3cda6-1f99-4214-8482-8fc8dc8e8351
2023-07-21 18:14:20,381 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xceb15e227fe986d: from storage DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62 node DatanodeRegistration(127.0.0.1:43391, datanodeUuid=5df3cda6-1f99-4214-8482-8fc8dc8e8351, infoPort=42579, infoSecurePort=0, ipcPort=36435, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,381 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xceb15e227fe986d: Processing first storage report for DS-78cc49e3-e2e2-443d-8985-8c8422f40b8a from datanode 5df3cda6-1f99-4214-8482-8fc8dc8e8351
2023-07-21 18:14:20,381 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xceb15e227fe986d: from storage DS-78cc49e3-e2e2-443d-8985-8c8422f40b8a node DatanodeRegistration(127.0.0.1:43391, datanodeUuid=5df3cda6-1f99-4214-8482-8fc8dc8e8351, infoPort=42579, infoSecurePort=0, ipcPort=36435, storageInfo=lv=-57;cid=testClusterID;nsid=1636276323;c=1689963257910), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 18:14:20,641 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71
2023-07-21 18:14:20,712 INFO [Listener at localhost/36435] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/zookeeper_0, clientPort=64847, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-21 18:14:20,727 INFO [Listener at localhost/36435] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64847
2023-07-21 18:14:20,737 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:20,740 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:21,410 INFO [Listener at localhost/36435] util.FSUtils(471): Created version file at hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 with version=8
2023-07-21 18:14:21,411 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/hbase-staging
2023-07-21 18:14:21,419 DEBUG [Listener at localhost/36435] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-21 18:14:21,419 DEBUG [Listener at localhost/36435] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-21 18:14:21,419 DEBUG [Listener at localhost/36435] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-21 18:14:21,419 DEBUG [Listener at localhost/36435] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-21 18:14:21,781 INFO [Listener at localhost/36435] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-21 18:14:22,306 INFO [Listener at localhost/36435] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 18:14:22,351 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:22,351 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:22,352 INFO [Listener at localhost/36435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 18:14:22,352 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:22,352 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 18:14:22,507 INFO [Listener at localhost/36435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 18:14:22,587 DEBUG [Listener at localhost/36435] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-21 18:14:22,684 INFO [Listener at localhost/36435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45593
2023-07-21 18:14:22,695 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:22,697 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:22,718 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45593 connecting to ZooKeeper ensemble=127.0.0.1:64847
2023-07-21 18:14:22,769 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:455930x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 18:14:22,779 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45593-0x10189176e190000 connected
2023-07-21 18:14:22,806 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 18:14:22,806 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 18:14:22,810 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 18:14:22,819 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45593
2023-07-21 18:14:22,819 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45593
2023-07-21 18:14:22,820 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45593
2023-07-21 18:14:22,821 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45593
2023-07-21 18:14:22,821 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45593
2023-07-21 18:14:22,855 INFO [Listener at localhost/36435] log.Log(170): Logging initialized @6989ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-21 18:14:22,983 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 18:14:22,984 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 18:14:22,985 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 18:14:22,987 INFO [Listener at localhost/36435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-21 18:14:22,987 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 18:14:22,987 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 18:14:22,991 INFO [Listener at localhost/36435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 18:14:23,054 INFO [Listener at localhost/36435] http.HttpServer(1146): Jetty bound to port 37861
2023-07-21 18:14:23,056 INFO [Listener at localhost/36435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 18:14:23,091 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,094 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ae16b10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,AVAILABLE}
2023-07-21 18:14:23,095 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,095 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ed7d79c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 18:14:23,280 INFO [Listener at localhost/36435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 18:14:23,292 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 18:14:23,292 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 18:14:23,294 INFO [Listener at localhost/36435] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 18:14:23,301 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,331 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@13466a5d{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/jetty-0_0_0_0-37861-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6118401510575975143/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-21 18:14:23,346 INFO [Listener at localhost/36435] server.AbstractConnector(333): Started ServerConnector@61a45427{HTTP/1.1, (http/1.1)}{0.0.0.0:37861}
2023-07-21 18:14:23,346 INFO [Listener at localhost/36435] server.Server(415): Started @7480ms
2023-07-21 18:14:23,350 INFO [Listener at localhost/36435] master.HMaster(444): hbase.rootdir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966, hbase.cluster.distributed=false
2023-07-21 18:14:23,426 INFO [Listener at localhost/36435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 18:14:23,427 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,427 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,427 INFO [Listener at localhost/36435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 18:14:23,427 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,428 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 18:14:23,434 INFO [Listener at localhost/36435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 18:14:23,438 INFO [Listener at localhost/36435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43419
2023-07-21 18:14:23,441 INFO [Listener at localhost/36435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 18:14:23,452 DEBUG [Listener at localhost/36435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 18:14:23,453 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,456 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,459 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43419 connecting to ZooKeeper ensemble=127.0.0.1:64847
2023-07-21 18:14:23,466 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:434190x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 18:14:23,467 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:434190x0, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 18:14:23,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43419-0x10189176e190001 connected
2023-07-21 18:14:23,469 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 18:14:23,470 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 18:14:23,491 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43419
2023-07-21 18:14:23,505 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43419
2023-07-21 18:14:23,510 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43419
2023-07-21 18:14:23,512 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43419
2023-07-21 18:14:23,512 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43419
2023-07-21 18:14:23,516 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 18:14:23,516 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 18:14:23,517 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 18:14:23,518 INFO [Listener at localhost/36435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 18:14:23,518 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 18:14:23,518 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 18:14:23,519 INFO [Listener at localhost/36435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 18:14:23,521 INFO [Listener at localhost/36435] http.HttpServer(1146): Jetty bound to port 33183
2023-07-21 18:14:23,521 INFO [Listener at localhost/36435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 18:14:23,523 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,523 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1af304e9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,AVAILABLE}
2023-07-21 18:14:23,524 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,524 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1dd1c21b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 18:14:23,681 INFO [Listener at localhost/36435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 18:14:23,684 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 18:14:23,684 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 18:14:23,685 INFO [Listener at localhost/36435] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 18:14:23,689 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,694 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@41ccb499{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/jetty-0_0_0_0-33183-hbase-server-2_4_18-SNAPSHOT_jar-_-any-583914410849180936/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 18:14:23,696 INFO [Listener at localhost/36435] server.AbstractConnector(333): Started ServerConnector@2f3b3e4c{HTTP/1.1, (http/1.1)}{0.0.0.0:33183}
2023-07-21 18:14:23,696 INFO [Listener at localhost/36435] server.Server(415): Started @7830ms
2023-07-21 18:14:23,716 INFO [Listener at localhost/36435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 18:14:23,716 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,717 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,717 INFO [Listener at localhost/36435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 18:14:23,717 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,718 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 18:14:23,718 INFO [Listener at localhost/36435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 18:14:23,721 INFO [Listener at localhost/36435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46437
2023-07-21 18:14:23,721 INFO [Listener at localhost/36435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 18:14:23,723 DEBUG [Listener at localhost/36435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 18:14:23,723 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,725 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,727 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46437 connecting to ZooKeeper ensemble=127.0.0.1:64847
2023-07-21 18:14:23,737 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:464370x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 18:14:23,739 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:464370x0, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 18:14:23,748 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46437-0x10189176e190002 connected
2023-07-21 18:14:23,749 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 18:14:23,750 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 18:14:23,754 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46437
2023-07-21 18:14:23,755 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46437
2023-07-21 18:14:23,757 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46437
2023-07-21 18:14:23,761 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46437
2023-07-21 18:14:23,762 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46437
2023-07-21 18:14:23,765 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 18:14:23,766 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 18:14:23,766 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 18:14:23,766 INFO [Listener at localhost/36435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 18:14:23,766 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 18:14:23,767 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 18:14:23,767 INFO [Listener at localhost/36435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 18:14:23,767 INFO [Listener at localhost/36435] http.HttpServer(1146): Jetty bound to port 33053
2023-07-21 18:14:23,768 INFO [Listener at localhost/36435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 18:14:23,780 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,781 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10a468c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,AVAILABLE}
2023-07-21 18:14:23,782 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,782 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4eb867fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 18:14:23,925 INFO [Listener at localhost/36435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 18:14:23,926 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 18:14:23,927 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 18:14:23,927 INFO [Listener at localhost/36435] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 18:14:23,928 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,929 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1d5b0a74{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/jetty-0_0_0_0-33053-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2072493398931689459/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 18:14:23,930 INFO [Listener at localhost/36435] server.AbstractConnector(333): Started ServerConnector@34693676{HTTP/1.1, (http/1.1)}{0.0.0.0:33053}
2023-07-21 18:14:23,930 INFO [Listener at localhost/36435] server.Server(415): Started @8064ms
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 18:14:23,943 INFO [Listener at localhost/36435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 18:14:23,945 INFO [Listener at localhost/36435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44049
2023-07-21 18:14:23,945 INFO [Listener at localhost/36435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 18:14:23,948 DEBUG [Listener at localhost/36435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 18:14:23,949 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,950 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:23,951 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44049 connecting to ZooKeeper ensemble=127.0.0.1:64847
2023-07-21 18:14:23,955 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:440490x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 18:14:23,956 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44049-0x10189176e190003 connected
2023-07-21 18:14:23,956 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 18:14:23,957 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 18:14:23,958 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 18:14:23,958 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44049
2023-07-21 18:14:23,958 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44049
2023-07-21 18:14:23,960 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44049
2023-07-21 18:14:23,960 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44049
2023-07-21 18:14:23,961 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44049
2023-07-21 18:14:23,963 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 18:14:23,964 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 18:14:23,964 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 18:14:23,964 INFO [Listener at localhost/36435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 18:14:23,964 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 18:14:23,964 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 18:14:23,965 INFO [Listener at localhost/36435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 18:14:23,966 INFO [Listener at localhost/36435] http.HttpServer(1146): Jetty bound to port 37971
2023-07-21 18:14:23,966 INFO [Listener at localhost/36435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 18:14:23,968 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,969 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ca31f38{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,AVAILABLE}
2023-07-21 18:14:23,969 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:23,969 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4cd52d74{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 18:14:24,111 INFO [Listener at localhost/36435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 18:14:24,113 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 18:14:24,113 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 18:14:24,113 INFO [Listener at localhost/36435] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 18:14:24,115 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 18:14:24,116 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@108a780c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/jetty-0_0_0_0-37971-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5257418339957089817/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 18:14:24,118 INFO [Listener at localhost/36435] server.AbstractConnector(333): Started ServerConnector@59373ab8{HTTP/1.1, (http/1.1)}{0.0.0.0:37971}
2023-07-21 18:14:24,118 INFO [Listener at localhost/36435] server.Server(415): Started @8252ms
2023-07-21 18:14:24,124 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 18:14:24,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1bef6a3b{HTTP/1.1, (http/1.1)}{0.0.0.0:35727}
2023-07-21 18:14:24,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8271ms
2023-07-21 18:14:24,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45593,1689963261589
2023-07-21 18:14:24,148 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 18:14:24,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45593,1689963261589
2023-07-21 18:14:24,170 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 18:14:24,170 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 18:14:24,170 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 18:14:24,171 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 18:14:24,170 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 18:14:24,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 18:14:24,174 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45593,1689963261589 from backup master directory
2023-07-21 18:14:24,174 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 18:14:24,178 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45593,1689963261589
2023-07-21 18:14:24,179 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 18:14:24,179 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-21 18:14:24,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45593,1689963261589
2023-07-21 18:14:24,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-21 18:14:24,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-21 18:14:24,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/hbase.id with ID: addc2e3b-8a1c-427b-bbf5-08971cf01958
2023-07-21 18:14:24,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 18:14:24,343 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 18:14:24,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x08bf68d6 to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 18:14:24,431 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@784f0472, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 18:14:24,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-21 18:14:24,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-21 18:14:24,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-21 18:14:24,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-21 18:14:24,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
	at java.lang.Enum.valueOf(Enum.java:238)
	at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-07-21 18:14:24,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
	at java.lang.Class.getDeclaredMethod(Class.java:2130)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 18:14:24,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:24,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store-tmp 2023-07-21 18:14:24,568 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:24,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 18:14:24,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:24,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:24,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 18:14:24,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:24,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
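[Editor's note] The 'master:store' creation above prints the table descriptor the master uses for its local region ({NAME => 'proc', BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', ...}). Those attributes map directly onto the public descriptor builders in hbase-client. A minimal sketch of assembling an equivalent descriptor, assuming the 2.4.x client API and reusing only the names that appear in the log (this is illustrative; the real descriptor is built internally by MasterRegionFactory):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      public static void main(String[] args) {
        // Column family 'proc' with the attributes printed in the log:
        // BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false'
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlocksize(64 * 1024)
            .setInMemory(false)
            .build();

        // Table 'master:store' (namespace 'master', qualifier 'store') with that single family.
        TableDescriptor store = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();

        // The descriptor's toString() is in roughly the same {NAME => ...} form seen above.
        System.out.println(store);
      }
    }
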
2023-07-21 18:14:24,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:24,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/WALs/jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:24,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45593%2C1689963261589, suffix=, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/WALs/jenkins-hbase4.apache.org,45593,1689963261589, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/oldWALs, maxLogs=10 2023-07-21 18:14:24,649 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:24,649 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:24,649 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:24,658 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 18:14:24,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/WALs/jenkins-hbase4.apache.org,45593,1689963261589/jenkins-hbase4.apache.org%2C45593%2C1689963261589.1689963264605 2023-07-21 18:14:24,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK]] 2023-07-21 18:14:24,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:24,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:24,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,817 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,824 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 18:14:24,855 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 18:14:24,870 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 18:14:24,875 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,898 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:24,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:24,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10749829920, jitterRate=0.0011559277772903442}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:24,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:24,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 18:14:24,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 18:14:24,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 18:14:24,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 18:14:24,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 2 msec 2023-07-21 18:14:24,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 51 msec 2023-07-21 18:14:24,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 18:14:25,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 18:14:25,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
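[Editor's note] Entries such as "Found 0 recovered edits file(s) under hdfs://localhost:37139/..." and "Wrote file=.../recovered.edits/1.seqid" are plain HDFS directory operations against the region directory. A small sketch of inspecting such a directory with the stock Hadoop FileSystem API; the NameNode address and region path below are simply the ones from this log and only resolve while the test's mini-DFS is running:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RecoveredEditsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode from the log; it exists only for the lifetime of the mini-cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:37139"), conf);

        Path regionDir = new Path(
            "/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966"
            + "/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682");
        Path recoveredEdits = new Path(regionDir, "recovered.edits");

        if (fs.exists(recoveredEdits)) {
          for (FileStatus st : fs.listStatus(recoveredEdits)) {
            // The master left a marker file named "<maxSeqId>.seqid" here (1.seqid in the log).
            System.out.println(st.getPath().getName() + " len=" + st.getLen());
          }
        }
        fs.close();
      }
    }
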
2023-07-21 18:14:25,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 18:14:25,045 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 18:14:25,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 18:14:25,053 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:25,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 18:14:25,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 18:14:25,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 18:14:25,073 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:25,073 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:25,073 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:25,073 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:25,073 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:25,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45593,1689963261589, sessionid=0x10189176e190000, setting cluster-up flag (Was=false) 2023-07-21 18:14:25,091 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:25,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 18:14:25,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:25,105 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:25,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 18:14:25,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:25,117 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.hbase-snapshot/.tmp 2023-07-21 18:14:25,123 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(951): ClusterId : addc2e3b-8a1c-427b-bbf5-08971cf01958 2023-07-21 18:14:25,123 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(951): ClusterId : addc2e3b-8a1c-427b-bbf5-08971cf01958 2023-07-21 18:14:25,124 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(951): ClusterId : addc2e3b-8a1c-427b-bbf5-08971cf01958 2023-07-21 18:14:25,130 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:25,130 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:25,130 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:25,137 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:25,137 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:25,137 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:25,137 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:25,137 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:25,137 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:25,142 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:25,142 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:25,142 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:25,144 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ReadOnlyZKClient(139): Connect 0x7edc3cc7 to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-21 18:14:25,144 DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ReadOnlyZKClient(139): Connect 0x6a0fc3fa to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:25,144 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ReadOnlyZKClient(139): Connect 0x00574c1a to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:25,155 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4555d274, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:25,155 DEBUG [RS:0;jenkins-hbase4:43419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c211cd8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:25,155 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36b229c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:25,155 DEBUG [RS:0;jenkins-hbase4:43419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c7ce05f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:25,159 DEBUG [RS:2;jenkins-hbase4:44049] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bdebd78, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:25,159 DEBUG [RS:2;jenkins-hbase4:44049] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d91cc5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:25,188 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46437 2023-07-21 18:14:25,188 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44049 2023-07-21 18:14:25,190 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43419 2023-07-21 18:14:25,195 INFO [RS:1;jenkins-hbase4:46437] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:25,195 INFO [RS:0;jenkins-hbase4:43419] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:25,196 INFO [RS:0;jenkins-hbase4:43419] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:25,195 INFO [RS:2;jenkins-hbase4:44049] regionserver.RegionServerCoprocessorHost(66): System 
coprocessor loading is enabled 2023-07-21 18:14:25,196 INFO [RS:2;jenkins-hbase4:44049] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:25,196 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:25,195 INFO [RS:1;jenkins-hbase4:46437] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:25,196 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:25,196 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:25,200 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:44049, startcode=1689963263942 2023-07-21 18:14:25,200 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:43419, startcode=1689963263425 2023-07-21 18:14:25,200 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:46437, startcode=1689963263715 2023-07-21 18:14:25,223 DEBUG [RS:2;jenkins-hbase4:44049] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:25,223 DEBUG [RS:0;jenkins-hbase4:43419] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:25,223 DEBUG [RS:1;jenkins-hbase4:46437] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:25,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 18:14:25,246 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 18:14:25,249 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:14:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 18:14:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
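[Editor's note] The coprocessor lines above show the master loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint (plus the test's CPMasterObserver) and the RSGroupInfoManager refreshing in offline mode. A hedged sketch of the configuration that typically enables rsgroups on a 2.x master, using the standard configuration keys; the exact keys a given deployment needs should be checked against its HBase version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Load the rsgroup admin endpoint on the master, as the coprocessor log lines above show.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");

        // The endpoint expects the group-aware balancer to be the master's load balancer.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");

        System.out.println(conf.get("hbase.coprocessor.master.classes"));
      }
    }
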
2023-07-21 18:14:25,291 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35041, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:25,291 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38151, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:25,291 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39859, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:25,301 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,315 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,316 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,337 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,337 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,337 WARN [RS:1;jenkins-hbase4:46437] 
regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 18:14:25,337 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,337 WARN [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 18:14:25,337 WARN [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 18:14:25,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:25,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:14:25,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 18:14:25,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:14:25,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
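[Editor's note] The ServerNotRunningYetException traces and the "reportForDuty failed; sleeping 100 ms and then retrying" / "sleeping 200 ms" warnings above are a normal startup race: the regionservers poll the master until it finishes initializing. The sketch below is only an illustration of that retry-with-backoff pattern in plain Java, not the HRegionServer code itself; retryWithBackoff and its parameters are made up for the example:

    import java.util.concurrent.Callable;

    public class ReportForDutyRetrySketch {
      // Illustrative only: retry a startup call with a growing sleep, as in the warnings above.
      static <T> T retryWithBackoff(Callable<T> call, long initialSleepMs, int maxAttempts)
          throws Exception {
        long sleep = initialSleepMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            return call.call();
          } catch (Exception e) {
            last = e;
            System.out.println("attempt " + attempt + " failed (" + e.getMessage()
                + "); sleeping " + sleep + " ms and then retrying");
            Thread.sleep(sleep);
            sleep = Math.min(sleep * 2, 60_000); // cap the backoff
          }
        }
        throw last;
      }

      public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        String result = retryWithBackoff(() -> {
          if (++calls[0] < 3) {
            throw new IllegalStateException("Server is not running yet");
          }
          return "registered";
        }, 100, 10);
        System.out.println(result);
      }
    }
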
2023-07-21 18:14:25,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:25,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689963295414 2023-07-21 18:14:25,417 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 18:14:25,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 18:14:25,426 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:25,428 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 18:14:25,431 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:25,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 18:14:25,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 18:14:25,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 18:14:25,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 18:14:25,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 18:14:25,438 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:46437, startcode=1689963263715 2023-07-21 18:14:25,439 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:44049, startcode=1689963263942 2023-07-21 18:14:25,438 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:43419, startcode=1689963263425 2023-07-21 18:14:25,440 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,441 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 18:14:25,442 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:25,444 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,443 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,444 WARN [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 18:14:25,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 18:14:25,444 WARN [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 18:14:25,446 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 18:14:25,446 WARN [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 18:14:25,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 18:14:25,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 18:14:25,454 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963265453,5,FailOnTimeoutGroup] 2023-07-21 18:14:25,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963265455,5,FailOnTimeoutGroup] 2023-07-21 18:14:25,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
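[Editor's note] The cleaner entries above ("Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled", HFileCleaner, ReplicationBarrierCleaner, ...) are periodic background tasks registered with the master's ChoreService. The sketch below shows only the scheduling idea with the plain JDK scheduler; it is not HBase's ChoreService implementation, and the task body is a stand-in:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreSchedulingSketch {
      public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Roughly what "ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS"
        // describes: a named task run at a fixed period (600 000 ms = 10 minutes).
        Runnable logsCleaner = () ->
            System.out.println("LogsCleaner pass at " + System.currentTimeMillis());

        scheduler.scheduleAtFixedRate(logsCleaner, 0, 600_000, TimeUnit.MILLISECONDS);

        // Let it run briefly in this sketch, then shut down.
        TimeUnit.SECONDS.sleep(1);
        scheduler.shutdownNow();
      }
    }
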
2023-07-21 18:14:25,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,517 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:25,518 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:25,518 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 2023-07-21 18:14:25,550 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:25,553 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:25,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:25,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:25,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:25,558 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:25,560 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:25,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:25,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:25,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:25,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:25,565 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:25,566 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:25,567 DEBUG [PEWorker-1] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:25,569 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:25,573 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 18:14:25,575 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:25,583 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:25,584 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9627189280, jitterRate=-0.10339812934398651}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:25,584 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:25,584 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:25,584 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:25,584 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:25,584 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:25,584 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:25,585 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:25,585 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:25,592 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:25,592 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 18:14:25,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 18:14:25,618 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 18:14:25,622 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 18:14:25,645 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with 
isa=jenkins-hbase4.apache.org/172.31.14.131:46437, startcode=1689963263715 2023-07-21 18:14:25,645 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:43419, startcode=1689963263425 2023-07-21 18:14:25,647 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:44049, startcode=1689963263942 2023-07-21 18:14:25,652 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,653 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:14:25,654 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 18:14:25,660 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,660 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:14:25,660 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 18:14:25,661 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 2023-07-21 18:14:25,661 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37139 2023-07-21 18:14:25,661 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37861 2023-07-21 18:14:25,661 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,661 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
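Editor's note: the reportForDuty/registration entries above show the three mini-cluster region servers joining the master, with the rsgroup ServerEventsListenerThread folding each one into the 'default' group ("Updated with servers: 1, 2, 3"). As a hedged sketch only, not code from this test, the same membership could be read back through the rsgroup client API that this module ships, assuming an open Connection to the running cluster:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroupServers {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml / test cluster conf
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      // After the three registrations logged above, the default group would hold three servers.
      for (Address server : defaultGroup.getServers()) {
        System.out.println("default group member: " + server);
      }
    }
  }
}
```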
2023-07-21 18:14:25,662 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 2023-07-21 18:14:25,662 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 18:14:25,662 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37139 2023-07-21 18:14:25,662 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37861 2023-07-21 18:14:25,667 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 2023-07-21 18:14:25,667 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37139 2023-07-21 18:14:25,667 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37861 2023-07-21 18:14:25,674 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:25,674 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,675 DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,675 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,675 WARN [RS:1;jenkins-hbase4:46437] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:14:25,675 WARN [RS:2;jenkins-hbase4:44049] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:14:25,676 INFO [RS:1;jenkins-hbase4:46437] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:25,675 WARN [RS:0;jenkins-hbase4:43419] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
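Editor's note: each region server above instantiates a WALProvider of type AsyncFSWALProvider. The provider choice is configuration-driven; a minimal sketch (not from this test) of pinning it explicitly, assuming the standard hbase.wal.provider key:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider, the provider reported in the log above;
    // "filesystem" would select the classic FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}
```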
2023-07-21 18:14:25,676 INFO [RS:2;jenkins-hbase4:44049] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:25,676 INFO [RS:0;jenkins-hbase4:43419] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:25,676 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,676 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,677 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,677 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43419,1689963263425] 2023-07-21 18:14:25,677 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46437,1689963263715] 2023-07-21 18:14:25,677 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44049,1689963263942] 2023-07-21 18:14:25,695 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,695 DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,695 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,695 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,696 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,695 DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,696 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,696 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,696 
DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,710 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:25,710 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:25,710 DEBUG [RS:0;jenkins-hbase4:43419] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:25,722 INFO [RS:1;jenkins-hbase4:46437] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:25,722 INFO [RS:2;jenkins-hbase4:44049] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:25,722 INFO [RS:0;jenkins-hbase4:43419] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:25,753 INFO [RS:1;jenkins-hbase4:46437] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:25,753 INFO [RS:2;jenkins-hbase4:44049] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:25,753 INFO [RS:0;jenkins-hbase4:43419] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:25,763 INFO [RS:2;jenkins-hbase4:44049] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:25,763 INFO [RS:1;jenkins-hbase4:46437] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:25,763 INFO [RS:0;jenkins-hbase4:43419] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:25,764 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,764 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,764 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
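Editor's note: the MemStoreFlusher and PressureAwareCompactionThroughputController entries above report derived runtime values (782.4 M global memstore limit with a 743.3 M low-water mark, 100/50 MB/s compaction throughput bounds, 60000 ms tuning period). A hedged sketch of the configuration keys these values are believed to come from; the key names are quoted from memory rather than from this log, so treat them as assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreAndCompactionThrottle {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap usable by all memstores (the log's
    // globalMemStoreLimit); the lower-limit fraction yields the low-water mark.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Bounds used by PressureAwareCompactionThroughputController, matching the
    // 100 MB/s, 50 MB/s and 60000 ms values reported above.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
  }
}
```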
2023-07-21 18:14:25,767 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:25,767 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:25,770 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:25,777 DEBUG [jenkins-hbase4:45593] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 18:14:25,783 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,783 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,783 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,784 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,784 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:25,785 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] 
executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:25,786 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:2;jenkins-hbase4:44049] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,786 DEBUG [RS:0;jenkins-hbase4:43419] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,785 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,787 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,787 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,787 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:25,787 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,788 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,788 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,788 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:25,802 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,802 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,802 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,804 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,804 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,804 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,802 DEBUG [jenkins-hbase4:45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:25,870 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,870 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,870 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,872 DEBUG [jenkins-hbase4:45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:25,872 DEBUG [jenkins-hbase4:45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:25,872 DEBUG [jenkins-hbase4:45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:25,872 DEBUG [jenkins-hbase4:45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:25,881 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43419,1689963263425, state=OPENING 2023-07-21 18:14:25,895 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 18:14:25,895 INFO [RS:0;jenkins-hbase4:43419] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:25,896 INFO [RS:1;jenkins-hbase4:46437] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:25,895 INFO [RS:2;jenkins-hbase4:44049] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:25,900 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:25,901 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:25,902 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43419,1689963263425-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
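Editor's note: the master has just published the hbase:meta location in ZooKeeper (state=OPENING on jenkins-hbase4.apache.org,43419,1689963263425). Once the region reaches OPEN, a client can resolve that location without touching ZooKeeper paths directly; a minimal, illustrative sketch using the public client API, not part of this test:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class WhereIsMeta {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // hbase:meta is a single region spanning the whole keyspace, so the empty
      // start row is enough to locate it. In the run above this would resolve to
      // the 43419 server once the region is OPEN.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}
```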
2023-07-21 18:14:25,903 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44049,1689963263942-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,903 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46437,1689963263715-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:25,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:25,932 INFO [RS:2;jenkins-hbase4:44049] regionserver.Replication(203): jenkins-hbase4.apache.org,44049,1689963263942 started 2023-07-21 18:14:25,932 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44049,1689963263942, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44049, sessionid=0x10189176e190003 2023-07-21 18:14:25,933 INFO [RS:0;jenkins-hbase4:43419] regionserver.Replication(203): jenkins-hbase4.apache.org,43419,1689963263425 started 2023-07-21 18:14:25,933 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:25,933 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43419,1689963263425, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43419, sessionid=0x10189176e190001 2023-07-21 18:14:25,933 DEBUG [RS:2;jenkins-hbase4:44049] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,933 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:25,933 INFO [RS:1;jenkins-hbase4:46437] regionserver.Replication(203): jenkins-hbase4.apache.org,46437,1689963263715 started 2023-07-21 18:14:25,933 DEBUG [RS:0;jenkins-hbase4:43419] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,933 DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44049,1689963263942' 2023-07-21 18:14:25,934 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43419,1689963263425' 2023-07-21 18:14:25,934 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46437,1689963263715, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46437, sessionid=0x10189176e190002 2023-07-21 18:14:25,935 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:25,935 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:25,935 DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:25,935 DEBUG [RS:1;jenkins-hbase4:46437] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,935 DEBUG 
[RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46437,1689963263715' 2023-07-21 18:14:25,936 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:25,936 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:25,936 DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:25,936 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:25,937 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:25,937 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:25,937 DEBUG [RS:0;jenkins-hbase4:43419] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:25,937 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43419,1689963263425' 2023-07-21 18:14:25,937 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:25,937 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:25,937 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:25,937 DEBUG [RS:1;jenkins-hbase4:46437] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:25,940 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46437,1689963263715' 2023-07-21 18:14:25,940 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:25,940 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:25,941 DEBUG [RS:0;jenkins-hbase4:43419] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:25,941 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:25,941 DEBUG [RS:2;jenkins-hbase4:44049] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:25,941 DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44049,1689963263942' 2023-07-21 18:14:25,941 DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:25,941 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:25,941 
DEBUG [RS:2;jenkins-hbase4:44049] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:25,941 DEBUG [RS:0;jenkins-hbase4:43419] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:25,942 INFO [RS:0;jenkins-hbase4:43419] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:14:25,942 INFO [RS:0;jenkins-hbase4:43419] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 18:14:25,942 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:25,942 DEBUG [RS:2;jenkins-hbase4:44049] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:25,942 INFO [RS:1;jenkins-hbase4:46437] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:14:25,942 INFO [RS:2;jenkins-hbase4:44049] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:14:25,942 INFO [RS:2;jenkins-hbase4:44049] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 18:14:25,942 INFO [RS:1;jenkins-hbase4:46437] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 18:14:25,998 WARN [ReadOnlyZKClient-127.0.0.1:64847@0x08bf68d6] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 18:14:26,027 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:26,031 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35322, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:26,032 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43419] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35322 deadline: 1689963326032, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:26,054 INFO [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43419%2C1689963263425, suffix=, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:26,054 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46437%2C1689963263715, suffix=, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,46437,1689963263715, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:26,054 INFO [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44049%2C1689963263942, suffix=, 
logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,44049,1689963263942, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:26,089 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:26,098 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:26,113 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:26,114 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:26,115 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:26,128 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:26,130 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:26,130 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:26,132 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:26,134 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:26,134 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:26,136 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:26,144 INFO [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425/jenkins-hbase4.apache.org%2C43419%2C1689963263425.1689963266059 2023-07-21 18:14:26,148 DEBUG [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK]] 2023-07-21 18:14:26,153 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,46437,1689963263715/jenkins-hbase4.apache.org%2C46437%2C1689963263715.1689963266059 2023-07-21 18:14:26,154 INFO [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,44049,1689963263942/jenkins-hbase4.apache.org%2C44049%2C1689963263942.1689963266059 2023-07-21 18:14:26,155 DEBUG [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK]] 2023-07-21 18:14:26,158 DEBUG [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK]] 2023-07-21 18:14:26,171 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 18:14:26,172 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:26,179 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43419%2C1689963263425.meta, suffix=.meta, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:26,199 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:26,206 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:26,207 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:26,213 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425/jenkins-hbase4.apache.org%2C43419%2C1689963263425.meta.1689963266181.meta 2023-07-21 18:14:26,214 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK]] 2023-07-21 18:14:26,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:26,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:14:26,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 18:14:26,221 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 18:14:26,226 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 18:14:26,226 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:26,227 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 18:14:26,227 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 18:14:26,229 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:26,231 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:26,231 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:26,232 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:26,233 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:26,233 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:26,234 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:26,234 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:26,235 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:26,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:26,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:26,237 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:26,237 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:26,238 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:26,239 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:26,240 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:26,243 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:26,249 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 18:14:26,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:26,261 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10273845280, jitterRate=-0.043173596262931824}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:26,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:26,273 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689963266091 2023-07-21 18:14:26,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 18:14:26,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 18:14:26,303 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43419,1689963263425, state=OPEN 2023-07-21 18:14:26,306 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:26,306 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:26,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 18:14:26,313 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43419,1689963263425 in 401 msec 
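Editor's note: at this point hbase:meta is open on 43419 with the MultiRowMutationEndpoint coprocessor loaded, and the assignment procedures (pid=3, pid=2) are completing. For illustration only, a small client-side scan of the meta 'info' family, which would show the catalog rows this region now serves; the setup here is an assumption, not taken from the test:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaInfo {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result row : scanner) {
        // Each row describes a region; right after startup only catalog/system entries exist.
        System.out.println(Bytes.toStringBinary(row.getRow()));
      }
    }
  }
}
```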
2023-07-21 18:14:26,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 18:14:26,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 713 msec 2023-07-21 18:14:26,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0670 sec 2023-07-21 18:14:26,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689963266329, completionTime=-1 2023-07-21 18:14:26,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 18:14:26,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 18:14:26,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 18:14:26,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689963326402 2023-07-21 18:14:26,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689963386402 2023-07-21 18:14:26,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 72 msec 2023-07-21 18:14:26,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45593,1689963261589-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:26,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45593,1689963261589-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:26,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45593,1689963261589-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:26,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45593, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:26,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:26,435 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 18:14:26,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
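Editor's note: the master notices the namespace table is missing and starts creating it ("Namespace table not found. Creating..."). The hbase:namespace table it is about to build backs the namespace API surface; a hedged illustration of that surface from a client (the namespace name used below is hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class Namespaces {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Backed by the hbase:namespace table whose creation is logged above.
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println("namespace: " + ns.getName());
      }
      // Hypothetical namespace name, shown only to illustrate the call.
      admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
    }
  }
}
```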
2023-07-21 18:14:26,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:26,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 18:14:26,475 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:26,479 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:26,498 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,501 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 empty. 2023-07-21 18:14:26,501 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,502 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 18:14:26,553 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:26,553 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:26,555 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 18:14:26,557 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2a5ec5469486ef5b01d5318bdbcbddf7, NAME => 'hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:26,559 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:26,561 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:26,565 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,567 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e empty. 2023-07-21 18:14:26,568 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,568 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 18:14:26,590 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:26,591 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2a5ec5469486ef5b01d5318bdbcbddf7, disabling compactions & flushes 2023-07-21 18:14:26,591 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:26,591 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:26,591 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. after waiting 0 ms 2023-07-21 18:14:26,591 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:26,591 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
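
The CreateTableProcedure above is driven by the table descriptor printed in the log. A minimal sketch, using a hypothetical table name, of building an equivalent schema (a single in-memory 'info' family, ROW bloom filter, 10 versions, 8 KB blocks) with the HBase 2.x client API and submitting it:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void create(Admin admin) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_namespace_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        admin.createTable(td); // enqueues a CreateTableProcedure like pid=4 above
      }
    }
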
2023-07-21 18:14:26,591 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2a5ec5469486ef5b01d5318bdbcbddf7: 2023-07-21 18:14:26,600 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:26,611 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:26,620 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 17cd69e9cdda513d9c4530910b66d92e, NAME => 'hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:26,624 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963266607"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963266607"}]},"ts":"1689963266607"} 2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 17cd69e9cdda513d9c4530910b66d92e, disabling compactions & flushes 2023-07-21 18:14:26,644 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. after waiting 0 ms 2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:26,644 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 
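
hbase:rsgroup differs from hbase:namespace mainly in its coprocessor and split policy, both visible in the descriptor logged above. A minimal sketch, again with a placeholder table name, of expressing those two attributes through TableDescriptorBuilder:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeTableSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            // Same endpoint the region servers load for hbase:rsgroup further down in this log.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // Produces the SPLIT_POLICY metadata entry shown in the descriptor above.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }
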
2023-07-21 18:14:26,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 17cd69e9cdda513d9c4530910b66d92e: 2023-07-21 18:14:26,654 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:26,656 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963266655"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963266655"}]},"ts":"1689963266655"} 2023-07-21 18:14:26,661 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:26,663 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:26,665 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:26,668 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:26,671 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963266669"}]},"ts":"1689963266669"} 2023-07-21 18:14:26,671 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963266663"}]},"ts":"1689963266663"} 2023-07-21 18:14:26,675 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 18:14:26,677 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 18:14:26,680 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:26,681 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:26,681 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:26,681 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:26,681 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:26,684 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=17cd69e9cdda513d9c4530910b66d92e, ASSIGN}] 2023-07-21 18:14:26,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:26,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:26,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:26,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:26,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): 
Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:26,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, ASSIGN}] 2023-07-21 18:14:26,687 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=17cd69e9cdda513d9c4530910b66d92e, ASSIGN 2023-07-21 18:14:26,690 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=17cd69e9cdda513d9c4530910b66d92e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:26,690 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, ASSIGN 2023-07-21 18:14:26,692 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:26,692 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
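
The BaseLoadBalancer entries above are the round-robin plan computed for the two new system regions; later in this log the test switches the balancer off (set balanceSwitch=false) so placement stays deterministic. A minimal sketch, assuming an existing Admin handle, of toggling and running the balancer from a client:

    import org.apache.hadoop.hbase.client.Admin;

    public class BalancerSwitchSketch {
      public static void disableThenBalanceOnce(Admin admin) throws Exception {
        boolean wasOn = admin.balancerSwitch(false, true); // turn off; wait for any outstanding balance() call
        System.out.println("balancer previously " + (wasOn ? "enabled" : "disabled"));
        admin.balancerSwitch(true, true);                  // turn back on
        admin.balance();                                   // request a single balance round
      }
    }
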
2023-07-21 18:14:26,694 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=17cd69e9cdda513d9c4530910b66d92e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:26,694 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:26,694 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963266694"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963266694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963266694"}]},"ts":"1689963266694"} 2023-07-21 18:14:26,694 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963266694"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963266694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963266694"}]},"ts":"1689963266694"} 2023-07-21 18:14:26,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 17cd69e9cdda513d9c4530910b66d92e, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:26,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:26,859 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:26,859 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:26,861 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:26,861 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:26,867 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47730, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:26,867 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:26,876 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 
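
The regioninfo/sn/state columns written to hbase:meta above are what client code later resolves through a RegionLocator. A minimal sketch, with the connection and table name supplied by the caller, of reading those locations back once the regions are open:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      public static void printLocations(Connection conn, TableName table) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Mirrors the "regionState=OPEN, regionLocation=..." entries below.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
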
2023-07-21 18:14:26,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17cd69e9cdda513d9c4530910b66d92e, NAME => 'hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:26,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:14:26,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. service=MultiRowMutationService 2023-07-21 18:14:26,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 18:14:26,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:26,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,882 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
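
The MultiRowMutationEndpoint registered above is what lets the rsgroup implementation update several rows of hbase:rsgroup atomically. A hedged sketch of the documented invocation pattern for that endpoint, with placeholder puts whose rows must live in the same region for the call to succeed:

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
    import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
    import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto.MutationType;
    import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsRequest;
    import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MultiRowMutationService;

    public class MultiRowMutationSketch {
      public static void atomicPuts(Table table, Put p1, Put p2) throws Exception {
        MutateRowsRequest request = MutateRowsRequest.newBuilder()
            .addMutationRequest(ProtobufUtil.toMutation(MutationType.PUT, p1))
            .addMutationRequest(ProtobufUtil.toMutation(MutationType.PUT, p2))
            .build();
        // Route the call through the region holding p1's row; all rows must be co-located.
        CoprocessorRpcChannel channel = table.coprocessorService(p1.getRow());
        MultiRowMutationService.BlockingInterface service =
            MultiRowMutationService.newBlockingStub(channel);
        service.mutateRows(null, request); // applies both puts atomically
      }
    }
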
2023-07-21 18:14:26,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2a5ec5469486ef5b01d5318bdbcbddf7, NAME => 'hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:26,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:26,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,887 INFO [StoreOpener-17cd69e9cdda513d9c4530910b66d92e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,888 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,890 DEBUG [StoreOpener-17cd69e9cdda513d9c4530910b66d92e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/m 2023-07-21 18:14:26,890 DEBUG [StoreOpener-17cd69e9cdda513d9c4530910b66d92e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/m 2023-07-21 18:14:26,891 DEBUG [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info 2023-07-21 18:14:26,891 INFO [StoreOpener-17cd69e9cdda513d9c4530910b66d92e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17cd69e9cdda513d9c4530910b66d92e columnFamilyName m 2023-07-21 18:14:26,891 DEBUG 
[StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info 2023-07-21 18:14:26,892 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2a5ec5469486ef5b01d5318bdbcbddf7 columnFamilyName info 2023-07-21 18:14:26,892 INFO [StoreOpener-17cd69e9cdda513d9c4530910b66d92e-1] regionserver.HStore(310): Store=17cd69e9cdda513d9c4530910b66d92e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:26,893 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] regionserver.HStore(310): Store=2a5ec5469486ef5b01d5318bdbcbddf7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:26,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:26,904 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:26,909 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:26,910 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
2a5ec5469486ef5b01d5318bdbcbddf7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10419207840, jitterRate=-0.029635652899742126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:26,910 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2a5ec5469486ef5b01d5318bdbcbddf7: 2023-07-21 18:14:26,913 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7., pid=9, masterSystemTime=1689963266861 2023-07-21 18:14:26,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:26,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17cd69e9cdda513d9c4530910b66d92e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4ce0344f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:26,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17cd69e9cdda513d9c4530910b66d92e: 2023-07-21 18:14:26,918 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e., pid=8, masterSystemTime=1689963266859 2023-07-21 18:14:26,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:26,922 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:26,925 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:26,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:26,925 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963266924"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963266924"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963266924"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963266924"}]},"ts":"1689963266924"} 2023-07-21 18:14:26,926 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 
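
The CompactionConfiguration(173) lines above echo the effective compaction settings at store-open time. A minimal sketch of the corresponding site-configuration keys, set here to the same values the log reports, purely to show where those numbers come from:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static Configuration defaultsAsLogged() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);               // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);              // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);        // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);   // major period, 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        return conf;
      }
    }
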
2023-07-21 18:14:26,928 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=17cd69e9cdda513d9c4530910b66d92e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:26,929 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963266928"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963266928"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963266928"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963266928"}]},"ts":"1689963266928"} 2023-07-21 18:14:26,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 18:14:26,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,44049,1689963263942 in 228 msec 2023-07-21 18:14:26,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 18:14:26,945 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 17cd69e9cdda513d9c4530910b66d92e, server=jenkins-hbase4.apache.org,46437,1689963263715 in 238 msec 2023-07-21 18:14:26,945 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-21 18:14:26,945 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, ASSIGN in 252 msec 2023-07-21 18:14:26,947 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:26,947 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963266947"}]},"ts":"1689963266947"} 2023-07-21 18:14:26,950 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 18:14:26,950 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=17cd69e9cdda513d9c4530910b66d92e, ASSIGN in 261 msec 2023-07-21 18:14:26,955 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 18:14:26,955 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:26,956 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963266955"}]},"ts":"1689963266955"} 2023-07-21 18:14:26,959 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 18:14:26,959 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:26,968 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:26,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 501 msec 2023-07-21 18:14:26,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 415 msec 2023-07-21 18:14:26,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 18:14:26,978 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:26,978 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:27,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:27,011 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42246, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:27,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 18:14:27,063 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:27,071 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47740, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:27,078 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 18:14:27,078 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
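
The CreateNamespaceProcedure stored above (and its twin for the hbase namespace just below) is triggered internally here, but the same procedure backs the public Admin call. A minimal sketch with a placeholder namespace name:

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class NamespaceSketch {
      public static void createAndList(Admin admin) throws Exception {
        admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
          System.out.println(ns.getName()); // expect at least "default" and "hbase"
        }
      }
    }
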
2023-07-21 18:14:27,082 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:27,094 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 71 msec 2023-07-21 18:14:27,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 18:14:27,119 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:27,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 26 msec 2023-07-21 18:14:27,139 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 18:14:27,142 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 18:14:27,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.963sec 2023-07-21 18:14:27,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 18:14:27,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 18:14:27,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 18:14:27,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45593,1689963261589-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 18:14:27,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45593,1689963261589-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
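
With "Master has completed initialization" logged, the cluster is usable from clients. A minimal sketch, assuming an Admin obtained from an ordinary Connection, of confirming the shape seen in this log (one active master, three region servers at this point):

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.client.Admin;

    public class ClusterShapeSketch {
      public static void printShape(Admin admin) throws Exception {
        ClusterMetrics metrics = admin.getClusterMetrics();
        System.out.println("active master: " + metrics.getMasterName());
        System.out.println("live region servers: " + metrics.getLiveServerMetrics().size());
      }
    }
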
2023-07-21 18:14:27,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 18:14:27,164 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:27,164 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:27,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:14:27,173 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 18:14:27,183 DEBUG [Listener at localhost/36435] zookeeper.ReadOnlyZKClient(139): Connect 0x24795960 to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:27,187 DEBUG [Listener at localhost/36435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41cba5e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:27,206 DEBUG [hconnection-0x576859ba-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:27,222 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:27,236 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:27,238 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:27,250 DEBUG [Listener at localhost/36435] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 18:14:27,253 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53692, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 18:14:27,271 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 18:14:27,271 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:27,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 18:14:27,280 DEBUG [Listener at localhost/36435] zookeeper.ReadOnlyZKClient(139): Connect 0x04b61423 to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:27,287 DEBUG [Listener at localhost/36435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34356257, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:27,288 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64847 2023-07-21 18:14:27,291 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:27,306 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10189176e19000a connected 2023-07-21 18:14:27,331 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=682, MaxFileDescriptor=60000, SystemLoadAverage=605, ProcessCount=174, AvailableMemoryMB=8524 2023-07-21 18:14:27,333 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 18:14:27,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:27,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:27,415 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 18:14:27,427 INFO [Listener at localhost/36435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:27,428 INFO [Listener at localhost/36435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:27,431 INFO [Listener at localhost/36435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41863 2023-07-21 18:14:27,432 INFO [Listener at localhost/36435] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-21 18:14:27,435 DEBUG [Listener at localhost/36435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:14:27,437 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:27,442 INFO [Listener at localhost/36435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:27,446 INFO [Listener at localhost/36435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41863 connecting to ZooKeeper ensemble=127.0.0.1:64847 2023-07-21 18:14:27,451 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:418630x0, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:27,453 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(162): regionserver:418630x0, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:14:27,454 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(162): regionserver:418630x0, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 18:14:27,455 DEBUG [Listener at localhost/36435] zookeeper.ZKUtil(164): regionserver:418630x0, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:27,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41863-0x10189176e19000b connected 2023-07-21 18:14:27,463 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41863 2023-07-21 18:14:27,466 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41863 2023-07-21 18:14:27,468 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41863 2023-07-21 18:14:27,470 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41863 2023-07-21 18:14:27,471 DEBUG [Listener at localhost/36435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41863 2023-07-21 18:14:27,474 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:27,474 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:27,474 INFO [Listener at localhost/36435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:27,475 INFO [Listener at localhost/36435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:14:27,475 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:27,475 INFO [Listener at localhost/36435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:27,475 INFO [Listener at localhost/36435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:14:27,476 INFO [Listener at localhost/36435] http.HttpServer(1146): Jetty bound to port 42549 2023-07-21 18:14:27,476 INFO [Listener at localhost/36435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:27,478 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:27,478 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6893d3c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:27,478 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:27,479 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e82bdde{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:27,607 INFO [Listener at localhost/36435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:27,608 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:27,609 INFO [Listener at localhost/36435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:27,609 INFO [Listener at localhost/36435] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:14:27,633 INFO [Listener at localhost/36435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:27,635 INFO [Listener at localhost/36435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1e4b632f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/java.io.tmpdir/jetty-0_0_0_0-42549-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6593570955651768274/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:27,637 INFO [Listener at localhost/36435] server.AbstractConnector(333): Started ServerConnector@b0e8cf{HTTP/1.1, (http/1.1)}{0.0.0.0:42549} 2023-07-21 18:14:27,637 INFO [Listener at localhost/36435] server.Server(415): Started @11771ms 2023-07-21 18:14:27,656 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(951): ClusterId : addc2e3b-8a1c-427b-bbf5-08971cf01958 2023-07-21 18:14:27,658 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:27,660 DEBUG 
[RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:27,660 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:27,663 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:27,664 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ReadOnlyZKClient(139): Connect 0x694ffec5 to 127.0.0.1:64847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:27,717 DEBUG [RS:3;jenkins-hbase4:41863] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67881d9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:27,717 DEBUG [RS:3;jenkins-hbase4:41863] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63b795b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:27,728 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41863 2023-07-21 18:14:27,728 INFO [RS:3;jenkins-hbase4:41863] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:27,728 INFO [RS:3;jenkins-hbase4:41863] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:27,728 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:27,730 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45593,1689963261589 with isa=jenkins-hbase4.apache.org/172.31.14.131:41863, startcode=1689963267427 2023-07-21 18:14:27,730 DEBUG [RS:3;jenkins-hbase4:41863] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:27,734 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36649, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:27,735 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45593] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,736 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
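
The ListRSGroupInfos request above is the test exercising the rsgroup admin endpoint. A hedged sketch, assuming the branch-2 RSGroupAdminClient API from the hbase-rsgroup module (class and method names should be checked against the version in use), of issuing the same call from test code:

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRsGroupsSketch {
      public static void list(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          System.out.println(group.getName() + " servers=" + group.getServers()
              + " tables=" + group.getTables());
        }
      }
    }
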
2023-07-21 18:14:27,736 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966 2023-07-21 18:14:27,736 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37139 2023-07-21 18:14:27,736 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37861 2023-07-21 18:14:27,747 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:27,747 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:27,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:27,747 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:27,747 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:27,748 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ZKUtil(162): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,749 WARN [RS:3;jenkins-hbase4:41863] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
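
RS:3 above is a fourth region server being added back for the test ("Restoring servers: 1"). A minimal sketch, assuming the running HBaseTestingUtility from this test class, of starting one extra region server in a minicluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      public static void startOne(HBaseTestingUtility util) throws Exception {
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        // Wait until the new server has checked in with the master (the reportForDuty above).
        rst.waitForServerOnline();
      }
    }
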
2023-07-21 18:14:27,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:14:27,749 INFO [RS:3;jenkins-hbase4:41863] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:27,749 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:27,748 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41863,1689963267427] 2023-07-21 18:14:27,749 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:27,749 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,749 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:27,750 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:27,757 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:27,757 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:27,757 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:27,757 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45593,1689963261589] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 18:14:27,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:27,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:27,763 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,770 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ZKUtil(162): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:27,770 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ZKUtil(162): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:27,771 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ZKUtil(162): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:27,771 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ZKUtil(162): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,773 DEBUG [RS:3;jenkins-hbase4:41863] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:27,773 INFO [RS:3;jenkins-hbase4:41863] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:27,777 INFO [RS:3;jenkins-hbase4:41863] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:27,777 INFO [RS:3;jenkins-hbase4:41863] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:27,777 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:27,779 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:27,781 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
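The MemStoreFlusher line above (globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false) is plain heap-size arithmetic. A minimal sketch of that arithmetic, assuming the standard HBase 2.x defaults for hbase.regionserver.global.memstore.size (0.4) and hbase.regionserver.global.memstore.size.lower.limit (0.95) and an assumed ~1956 MB test-JVM heap chosen only so the numbers line up with the log:

    public class MemStoreLimitMath {
      // Reproduces the arithmetic behind the MemStoreFlusher(125) line above.
      // The two property names are standard HBase 2.x keys; the heap size is an assumption.
      public static void main(String[] args) {
        double heapMb = 1956;      // assumed max heap of the test JVM
        double globalSize = 0.4;   // hbase.regionserver.global.memstore.size (default)
        double lowerLimit = 0.95;  // hbase.regionserver.global.memstore.size.lower.limit (default)
        double limitMb = heapMb * globalSize;        // ~782.4 M in the log
        double lowMarkMb = limitMb * lowerLimit;     // ~743.3 M in the log
        System.out.printf("globalMemStoreLimit=%.1f M, lowMark=%.1f M%n", limitMb, lowMarkMb);
      }
    }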
2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:27,782 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,783 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,783 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,783 DEBUG [RS:3;jenkins-hbase4:41863] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:27,792 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:27,792 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:27,792 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:27,807 INFO [RS:3;jenkins-hbase4:41863] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:27,808 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41863,1689963267427-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 18:14:27,821 INFO [RS:3;jenkins-hbase4:41863] regionserver.Replication(203): jenkins-hbase4.apache.org,41863,1689963267427 started 2023-07-21 18:14:27,821 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41863,1689963267427, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41863, sessionid=0x10189176e19000b 2023-07-21 18:14:27,821 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:27,821 DEBUG [RS:3;jenkins-hbase4:41863] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,821 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41863,1689963267427' 2023-07-21 18:14:27,821 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:27,822 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:27,824 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:27,825 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:27,825 DEBUG [RS:3;jenkins-hbase4:41863] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:27,825 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41863,1689963267427' 2023-07-21 18:14:27,825 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:27,825 DEBUG [RS:3;jenkins-hbase4:41863] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:27,826 DEBUG [RS:3;jenkins-hbase4:41863] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:27,826 INFO [RS:3;jenkins-hbase4:41863] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:14:27,826 INFO [RS:3;jenkins-hbase4:41863] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
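The RS:3 sequence above, from reportForDuty through "Serving as jenkins-hbase4.apache.org,41863,1689963267427", is what a fourth region server emits when the test adds it to the three-server mini cluster. A minimal sketch of how such a server is brought up, assuming the standard HBaseTestingUtility/MiniHBaseCluster test helpers rather than this test's exact setup code:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                      // three region servers, as in this run
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // Adding a fourth region server produces the reportForDuty / "Serving as ..." lines above.
        JVMClusterUtil.RegionServerThread rs = cluster.startRegionServer();
        rs.waitForServerOnline();
        util.shutdownMiniCluster();
      }
    }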
2023-07-21 18:14:27,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:27,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:27,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:27,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:27,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:27,844 DEBUG [hconnection-0x54c19f7b-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:27,848 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:27,859 DEBUG [hconnection-0x54c19f7b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:27,865 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:27,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:27,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:27,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:27,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:27,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:53692 deadline: 1689964467878, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:27,881 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:27,883 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:27,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:27,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:27,886 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:27,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:27,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:27,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:27,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:27,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:27,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:27,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:27,906 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:27,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:27,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:27,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:27,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:27,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:27,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:27,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:27,930 INFO [RS:3;jenkins-hbase4:41863] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41863%2C1689963267427, suffix=, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,41863,1689963267427, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:27,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:27,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:27,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:27,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:27,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:27,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:27,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:27,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:27,953 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 18:14:27,955 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 18:14:27,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 18:14:27,956 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43419,1689963263425, state=CLOSING 2023-07-21 18:14:27,958 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:27,958 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:27,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:27,994 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:27,996 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:27,996 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:28,011 INFO [RS:3;jenkins-hbase4:41863] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,41863,1689963267427/jenkins-hbase4.apache.org%2C41863%2C1689963267427.1689963267931 2023-07-21 18:14:28,014 DEBUG [RS:3;jenkins-hbase4:41863] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK]] 2023-07-21 18:14:28,133 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 18:14:28,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:28,134 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:28,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:28,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:28,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:28,137 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.49 KB heapSize=5 KB 2023-07-21 18:14:28,304 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.31 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/a3a400ee97674892b06c033e8612e30c 2023-07-21 18:14:28,390 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/c64176eae9d7460fb0e81673ba29f3c1 2023-07-21 18:14:28,405 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/a3a400ee97674892b06c033e8612e30c as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/a3a400ee97674892b06c033e8612e30c 2023-07-21 18:14:28,415 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/a3a400ee97674892b06c033e8612e30c, entries=20, sequenceid=14, filesize=7.0 K 2023-07-21 18:14:28,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/c64176eae9d7460fb0e81673ba29f3c1 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/c64176eae9d7460fb0e81673ba29f3c1 2023-07-21 18:14:28,428 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/c64176eae9d7460fb0e81673ba29f3c1, entries=4, sequenceid=14, filesize=4.8 K 2023-07-21 18:14:28,431 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.49 KB/2550, heapSize ~4.72 KB/4832, currentSize=0 B/0 for 1588230740 in 294ms, sequenceid=14, compaction requested=false 2023-07-21 18:14:28,434 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 18:14:28,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-21 18:14:28,450 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:28,450 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:28,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:28,451 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44049,1689963263942 record at close sequenceid=14 2023-07-21 18:14:28,453 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 18:14:28,455 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 18:14:28,463 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 18:14:28,463 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43419,1689963263425 in 496 msec 2023-07-21 18:14:28,465 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:28,615 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:28,616 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44049,1689963263942, state=OPENING 2023-07-21 18:14:28,617 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:28,618 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:28,618 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:28,778 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 18:14:28,778 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:28,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44049%2C1689963263942.meta, suffix=.meta, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,44049,1689963263942, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:28,803 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:28,804 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:28,811 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:28,816 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,44049,1689963263942/jenkins-hbase4.apache.org%2C44049%2C1689963263942.meta.1689963268782.meta 2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK]] 2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 18:14:28,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
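The WAL lines above (AsyncFSWALProvider, "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32") are driven by a handful of configuration keys. A sketch of those settings; the key names are recalled from HBase 2.x defaults rather than taken from this log, so treat them as assumptions to verify:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Provider behind "Instantiating WALProvider ... AsyncFSWALProvider" above.
        conf.set("hbase.wal.provider", "asyncfs");
        // blocksize=256 MB; rollsize = blocksize * multiplier = 128 MB in the log.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // maxLogs=32 in the log.
        conf.setInt("hbase.regionserver.maxlogs", 32);
      }
    }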
2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 18:14:28,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:28,820 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 18:14:28,820 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 18:14:28,822 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:28,824 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:28,824 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:28,824 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:28,840 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/a3a400ee97674892b06c033e8612e30c 2023-07-21 18:14:28,841 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:28,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:28,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:28,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 
18:14:28,844 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:28,845 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:28,845 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:28,847 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:28,847 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:28,847 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:28,862 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/c64176eae9d7460fb0e81673ba29f3c1 2023-07-21 18:14:28,863 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:28,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:28,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:28,873 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 18:14:28,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:28,877 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9634835520, jitterRate=-0.10268601775169373}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:28,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:28,879 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=14, masterSystemTime=1689963268772 2023-07-21 18:14:28,883 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44049,1689963263942, state=OPEN 2023-07-21 18:14:28,885 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:28,885 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:28,886 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 18:14:28,886 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 18:14:28,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 18:14:28,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44049,1689963263942 in 267 msec 2023-07-21 18:14:28,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 937 msec 2023-07-21 18:14:28,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 18:14:28,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to default 2023-07-21 18:14:28,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:28,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:28,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:28,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:28,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:28,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:28,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:28,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:28,984 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:28,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-21 18:14:28,991 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:28,993 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:28,999 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:28,999 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:29,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:29,009 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:29,011 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43419] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Get size: 151 connection: 172.31.14.131:35322 deadline: 1689963329010, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44049 startCode=1689963263942. As of locationSeqNum=14. 
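The rsgroup entries above (AddRSGroup, the ConstraintException when moving jenkins-hbase4.apache.org:45593, and the successful move of the 41863/43419 servers into Group_testTableMoveTruncateAndDrop_975014563) correspond to a short sequence of admin calls. A minimal sketch, assuming the test-facing RSGroupAdminClient that appears in the stack trace above (it is not a public client API):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // "add rsgroup Group_testTableMoveTruncateAndDrop_975014563" above.
          rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_975014563");
          // Moving live region servers succeeds and forces hbase:meta off the moved hosts,
          // which is the REOPEN/MOVE of region 1588230740 logged above.
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 41863),
              Address.fromParts("jenkins-hbase4.apache.org", 43419)));
          rsGroupAdmin.moveServers(servers, "Group_testTableMoveTruncateAndDrop_975014563");
          // Passing the master's address (port 45593) instead is what raised the
          // ConstraintException "Server ... is either offline or it does not exist." during setup.
        }
      }
    }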
2023-07-21 18:14:29,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:29,122 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:29,127 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:29,131 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b empty. 2023-07-21 18:14:29,131 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:29,131 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:29,132 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b empty. 2023-07-21 18:14:29,133 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:29,134 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:29,134 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 empty. 2023-07-21 18:14:29,134 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:29,136 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d empty. 
2023-07-21 18:14:29,136 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:29,137 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:29,137 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 empty. 2023-07-21 18:14:29,137 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:29,137 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 18:14:29,238 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:29,240 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3e4301fc2d1085820ec6d1b52321eb4b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:29,243 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e172c28c803be39cd8711ba6958b193d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:29,243 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b48f4ed9966e226aeaa1fd0b0344705b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => 
'1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:29,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:29,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:29,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b48f4ed9966e226aeaa1fd0b0344705b, disabling compactions & flushes 2023-07-21 18:14:29,405 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:29,405 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:29,405 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. after waiting 0 ms 2023-07-21 18:14:29,406 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:29,406 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 
2023-07-21 18:14:29,406 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b48f4ed9966e226aeaa1fd0b0344705b: 2023-07-21 18:14:29,406 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:29,406 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e172c28c803be39cd8711ba6958b193d, disabling compactions & flushes 2023-07-21 18:14:29,406 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5eb218bd02142ac1567ed2b3a6712355, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:29,406 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:29,407 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:29,407 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. after waiting 0 ms 2023-07-21 18:14:29,407 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:29,407 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 
2023-07-21 18:14:29,407 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e172c28c803be39cd8711ba6958b193d: 2023-07-21 18:14:29,407 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => c45990923a0173d364e05b38989ad464, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:29,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:29,453 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 5eb218bd02142ac1567ed2b3a6712355, disabling compactions & flushes 2023-07-21 18:14:29,453 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:29,454 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:29,454 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. after waiting 0 ms 2023-07-21 18:14:29,454 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:29,454 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 
2023-07-21 18:14:29,454 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 5eb218bd02142ac1567ed2b3a6712355: 2023-07-21 18:14:29,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:29,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:29,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3e4301fc2d1085820ec6d1b52321eb4b, disabling compactions & flushes 2023-07-21 18:14:29,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:29,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:29,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. after waiting 0 ms 2023-07-21 18:14:29,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:29,793 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:29,793 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3e4301fc2d1085820ec6d1b52321eb4b: 2023-07-21 18:14:29,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:29,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing c45990923a0173d364e05b38989ad464, disabling compactions & flushes 2023-07-21 18:14:29,850 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:29,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:29,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
after waiting 0 ms 2023-07-21 18:14:29,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:29,851 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:29,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for c45990923a0173d364e05b38989ad464: 2023-07-21 18:14:29,860 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:29,861 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963269861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963269861"}]},"ts":"1689963269861"} 2023-07-21 18:14:29,861 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963269861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963269861"}]},"ts":"1689963269861"} 2023-07-21 18:14:29,862 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963269861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963269861"}]},"ts":"1689963269861"} 2023-07-21 18:14:29,862 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963269861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963269861"}]},"ts":"1689963269861"} 2023-07-21 18:14:29,862 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963269861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963269861"}]},"ts":"1689963269861"} 2023-07-21 18:14:29,924 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 18:14:29,930 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:29,930 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963269930"}]},"ts":"1689963269930"} 2023-07-21 18:14:29,937 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 18:14:29,942 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:29,942 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:29,942 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:29,942 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:29,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, ASSIGN}] 2023-07-21 18:14:29,947 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, ASSIGN 2023-07-21 18:14:29,947 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, ASSIGN 2023-07-21 18:14:29,949 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, ASSIGN 2023-07-21 18:14:29,949 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, ASSIGN 2023-07-21 18:14:29,950 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:29,951 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:29,951 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:29,952 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:29,952 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, ASSIGN 2023-07-21 18:14:29,954 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:30,101 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 18:14:30,104 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:30,104 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:30,104 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,104 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,105 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963270104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963270104"}]},"ts":"1689963270104"} 2023-07-21 18:14:30,105 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963270104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963270104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963270104"}]},"ts":"1689963270104"} 2023-07-21 18:14:30,105 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963270104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963270104"}]},"ts":"1689963270104"} 2023-07-21 18:14:30,105 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963270104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963270104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963270104"}]},"ts":"1689963270104"} 2023-07-21 18:14:30,105 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963270104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963270104"}]},"ts":"1689963270104"} 2023-07-21 18:14:30,107 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE; OpenRegionProcedure 
c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:30,109 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=18, state=RUNNABLE; OpenRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:30,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=19, state=RUNNABLE; OpenRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:30,112 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:30,113 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=17, state=RUNNABLE; OpenRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:30,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:30,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:30,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b48f4ed9966e226aeaa1fd0b0344705b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 18:14:30,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 
2023-07-21 18:14:30,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:30,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e4301fc2d1085820ec6d1b52321eb4b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 18:14:30,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:30,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,271 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,272 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,273 DEBUG [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/f 2023-07-21 18:14:30,274 DEBUG [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/f 2023-07-21 18:14:30,274 DEBUG [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/f 2023-07-21 18:14:30,274 DEBUG 
[StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/f 2023-07-21 18:14:30,274 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b48f4ed9966e226aeaa1fd0b0344705b columnFamilyName f 2023-07-21 18:14:30,275 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e4301fc2d1085820ec6d1b52321eb4b columnFamilyName f 2023-07-21 18:14:30,275 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] regionserver.HStore(310): Store=b48f4ed9966e226aeaa1fd0b0344705b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:30,277 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] regionserver.HStore(310): Store=3e4301fc2d1085820ec6d1b52321eb4b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:30,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:30,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:30,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:30,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:30,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e4301fc2d1085820ec6d1b52321eb4b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11547840960, jitterRate=0.0754764974117279}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:30,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b48f4ed9966e226aeaa1fd0b0344705b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10250015520, jitterRate=-0.04539291560649872}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:30,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e4301fc2d1085820ec6d1b52321eb4b: 2023-07-21 18:14:30,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b48f4ed9966e226aeaa1fd0b0344705b: 2023-07-21 18:14:30,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b., pid=24, masterSystemTime=1689963270264 2023-07-21 18:14:30,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b., pid=25, masterSystemTime=1689963270262 2023-07-21 18:14:30,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:30,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:30,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 
2023-07-21 18:14:30,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e172c28c803be39cd8711ba6958b193d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 18:14:30,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:30,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,294 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:30,294 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963270293"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963270293"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963270293"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963270293"}]},"ts":"1689963270293"} 2023-07-21 18:14:30,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:30,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:30,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
2023-07-21 18:14:30,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c45990923a0173d364e05b38989ad464, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 18:14:30,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,295 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:30,298 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,298 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270295"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963270295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963270295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963270295"}]},"ts":"1689963270295"} 2023-07-21 18:14:30,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,300 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,301 DEBUG [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/f 2023-07-21 18:14:30,301 DEBUG [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/f 2023-07-21 18:14:30,302 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-21 18:14:30,302 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, 
state=SUCCESS; OpenRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,46437,1689963263715 in 184 msec 2023-07-21 18:14:30,303 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e172c28c803be39cd8711ba6958b193d columnFamilyName f 2023-07-21 18:14:30,303 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] regionserver.HStore(310): Store=e172c28c803be39cd8711ba6958b193d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:30,304 DEBUG [StoreOpener-c45990923a0173d364e05b38989ad464-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/f 2023-07-21 18:14:30,305 DEBUG [StoreOpener-c45990923a0173d364e05b38989ad464-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/f 2023-07-21 18:14:30,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, ASSIGN in 360 msec 2023-07-21 18:14:30,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=17 2023-07-21 18:14:30,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=17, state=SUCCESS; OpenRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,44049,1689963263942 in 187 msec 2023-07-21 18:14:30,307 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c45990923a0173d364e05b38989ad464 columnFamilyName f 2023-07-21 18:14:30,307 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,308 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] regionserver.HStore(310): Store=c45990923a0173d364e05b38989ad464/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:30,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, ASSIGN in 363 msec 2023-07-21 18:14:30,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:30,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:30,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:30,316 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e172c28c803be39cd8711ba6958b193d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10934748800, jitterRate=0.018377840518951416}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:30,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e172c28c803be39cd8711ba6958b193d: 2023-07-21 18:14:30,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d., pid=22, masterSystemTime=1689963270264 2023-07-21 18:14:30,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:30,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c45990923a0173d364e05b38989ad464; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10670763520, jitterRate=-0.006207704544067383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:30,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c45990923a0173d364e05b38989ad464: 2023-07-21 18:14:30,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464., pid=21, masterSystemTime=1689963270262 2023-07-21 18:14:30,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:30,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:30,322 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:30,322 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963270322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963270322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963270322"}]},"ts":"1689963270322"} 2023-07-21 18:14:30,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:30,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:30,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 
2023-07-21 18:14:30,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5eb218bd02142ac1567ed2b3a6712355, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 18:14:30,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:30,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,324 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,325 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963270323"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963270323"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963270323"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963270323"}]},"ts":"1689963270323"} 2023-07-21 18:14:30,329 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,329 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=18 2023-07-21 18:14:30,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; OpenRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,46437,1689963263715 in 217 msec 2023-07-21 18:14:30,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 18:14:30,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; OpenRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,44049,1689963263942 in 220 msec 2023-07-21 18:14:30,332 DEBUG [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/f 2023-07-21 18:14:30,332 DEBUG [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/f 2023-07-21 18:14:30,332 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, ASSIGN in 387 msec 2023-07-21 18:14:30,333 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5eb218bd02142ac1567ed2b3a6712355 columnFamilyName f 2023-07-21 18:14:30,334 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] regionserver.HStore(310): Store=5eb218bd02142ac1567ed2b3a6712355/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:30,334 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, ASSIGN in 389 msec 2023-07-21 18:14:30,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:30,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:30,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5eb218bd02142ac1567ed2b3a6712355; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10425041760, jitterRate=-0.029092326760292053}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:30,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5eb218bd02142ac1567ed2b3a6712355: 2023-07-21 18:14:30,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355., pid=23, masterSystemTime=1689963270262 2023-07-21 18:14:30,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:30,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:30,347 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:30,348 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963270347"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963270347"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963270347"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963270347"}]},"ts":"1689963270347"} 2023-07-21 18:14:30,355 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=19 2023-07-21 18:14:30,356 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=19, state=SUCCESS; OpenRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,44049,1689963263942 in 240 msec 2023-07-21 18:14:30,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=15 2023-07-21 18:14:30,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, ASSIGN in 413 msec 2023-07-21 18:14:30,360 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:30,361 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963270361"}]},"ts":"1689963270361"} 2023-07-21 18:14:30,363 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 18:14:30,368 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:30,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.3880 sec 2023-07-21 18:14:31,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 18:14:31,128 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-21 18:14:31,128 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 18:14:31,129 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:31,130 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43419] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:35338 deadline: 1689963331130, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44049 startCode=1689963263942. As of locationSeqNum=14. 2023-07-21 18:14:31,232 DEBUG [hconnection-0x576859ba-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:31,246 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39880, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:31,280 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-21 18:14:31,284 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:31,285 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-21 18:14:31,289 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:31,296 DEBUG [Listener at localhost/36435] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:31,300 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:31,302 DEBUG [Listener at localhost/36435] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:31,305 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42418, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:31,306 DEBUG [Listener at localhost/36435] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:31,312 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39888, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:31,314 DEBUG [Listener at localhost/36435] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:31,316 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45508, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:31,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:31,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:31,328 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:31,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:31,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:31,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 3e4301fc2d1085820ec6d1b52321eb4b to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:31,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:31,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:31,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:31,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:31,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, REOPEN/MOVE 2023-07-21 18:14:31,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region b48f4ed9966e226aeaa1fd0b0344705b to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,348 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, REOPEN/MOVE 2023-07-21 18:14:31,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): 
Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:31,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:31,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:31,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:31,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:31,350 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:31,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, REOPEN/MOVE 2023-07-21 18:14:31,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region e172c28c803be39cd8711ba6958b193d to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,351 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, REOPEN/MOVE 2023-07-21 18:14:31,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:31,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:31,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:31,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:31,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:31,351 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271350"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271350"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271350"}]},"ts":"1689963271350"} 2023-07-21 18:14:31,352 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:31,352 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271352"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271352"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271352"}]},"ts":"1689963271352"} 2023-07-21 18:14:31,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, REOPEN/MOVE 2023-07-21 18:14:31,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 5eb218bd02142ac1567ed2b3a6712355 to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:31,355 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, REOPEN/MOVE 2023-07-21 18:14:31,355 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=26, state=RUNNABLE; CloseRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:31,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, REOPEN/MOVE 2023-07-21 18:14:31,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region c45990923a0173d364e05b38989ad464 to RSGroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:31,356 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:31,356 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:31,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:31,356 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271356"}]},"ts":"1689963271356"} 2023-07-21 18:14:31,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:31,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:31,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:31,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:31,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, REOPEN/MOVE 2023-07-21 18:14:31,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_975014563, current retry=0 2023-07-21 18:14:31,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:31,361 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, REOPEN/MOVE 2023-07-21 18:14:31,361 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, REOPEN/MOVE 2023-07-21 18:14:31,363 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:31,363 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:31,363 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271363"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271363"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271363"}]},"ts":"1689963271363"} 2023-07-21 18:14:31,363 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271363"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271363"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271363"}]},"ts":"1689963271363"} 2023-07-21 18:14:31,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; CloseRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:31,368 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:31,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b48f4ed9966e226aeaa1fd0b0344705b, disabling compactions & flushes 2023-07-21 18:14:31,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. after waiting 0 ms 2023-07-21 18:14:31,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e172c28c803be39cd8711ba6958b193d, disabling compactions & flushes 2023-07-21 18:14:31,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:31,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:31,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. after waiting 0 ms 2023-07-21 18:14:31,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 
2023-07-21 18:14:31,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:31,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b48f4ed9966e226aeaa1fd0b0344705b: 2023-07-21 18:14:31,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b48f4ed9966e226aeaa1fd0b0344705b move to jenkins-hbase4.apache.org,43419,1689963263425 record at close sequenceid=2 2023-07-21 18:14:31,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:31,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:31,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e172c28c803be39cd8711ba6958b193d: 2023-07-21 18:14:31,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e172c28c803be39cd8711ba6958b193d move to jenkins-hbase4.apache.org,41863,1689963267427 record at close sequenceid=2 2023-07-21 18:14:31,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5eb218bd02142ac1567ed2b3a6712355, disabling compactions & flushes 2023-07-21 18:14:31,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:31,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:31,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. after waiting 0 ms 2023-07-21 18:14:31,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 
2023-07-21 18:14:31,532 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=CLOSED 2023-07-21 18:14:31,533 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271532"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963271532"}]},"ts":"1689963271532"} 2023-07-21 18:14:31,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:31,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e4301fc2d1085820ec6d1b52321eb4b, disabling compactions & flushes 2023-07-21 18:14:31,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:31,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:31,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. after waiting 0 ms 2023-07-21 18:14:31,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:31,535 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=CLOSED 2023-07-21 18:14:31,535 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963271535"}]},"ts":"1689963271535"} 2023-07-21 18:14:31,542 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-21 18:14:31,542 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,44049,1689963263942 in 180 msec 2023-07-21 18:14:31,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:31,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 
2023-07-21 18:14:31,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5eb218bd02142ac1567ed2b3a6712355: 2023-07-21 18:14:31,544 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:31,544 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-21 18:14:31,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5eb218bd02142ac1567ed2b3a6712355 move to jenkins-hbase4.apache.org,43419,1689963263425 record at close sequenceid=2 2023-07-21 18:14:31,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,46437,1689963263715 in 180 msec 2023-07-21 18:14:31,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:31,545 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:31,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:31,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e4301fc2d1085820ec6d1b52321eb4b: 2023-07-21 18:14:31,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3e4301fc2d1085820ec6d1b52321eb4b move to jenkins-hbase4.apache.org,43419,1689963263425 record at close sequenceid=2 2023-07-21 18:14:31,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c45990923a0173d364e05b38989ad464 2023-07-21 18:14:31,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c45990923a0173d364e05b38989ad464, disabling compactions & flushes 2023-07-21 18:14:31,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:31,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
2023-07-21 18:14:31,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. after waiting 0 ms 2023-07-21 18:14:31,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:31,550 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=CLOSED 2023-07-21 18:14:31,550 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963271550"}]},"ts":"1689963271550"} 2023-07-21 18:14:31,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:31,552 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=CLOSED 2023-07-21 18:14:31,552 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271552"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963271552"}]},"ts":"1689963271552"} 2023-07-21 18:14:31,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:31,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
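
The Put {"totalColumns":2, ...} entries above are RegionStateStore writes recording each region's CLOSED state in hbase:meta. Purely as an illustration, the sketch below scans the info family of hbase:meta and reads the state and server columns back out; the column qualifiers are inferred from the JSON in the log, and the class name is hypothetical.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMetaStateSketch {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection();
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan().addFamily(info))) {
          for (Result r : scanner) {
            // info:state and info:server are the qualifiers seen in the Put JSON above.
            byte[] state = r.getValue(info, Bytes.toBytes("state"));
            byte[] server = r.getValue(info, Bytes.toBytes("server"));
            System.out.println(Bytes.toString(r.getRow())
                + " state=" + (state == null ? "n/a" : Bytes.toString(state))
                + " server=" + (server == null ? "n/a" : Bytes.toString(server)));
          }
        }
      }
    }
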
2023-07-21 18:14:31,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c45990923a0173d364e05b38989ad464: 2023-07-21 18:14:31,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c45990923a0173d364e05b38989ad464 move to jenkins-hbase4.apache.org,43419,1689963263425 record at close sequenceid=2 2023-07-21 18:14:31,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-21 18:14:31,560 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; CloseRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,44049,1689963263942 in 186 msec 2023-07-21 18:14:31,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=26 2023-07-21 18:14:31,561 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:31,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=26, state=SUCCESS; CloseRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,46437,1689963263715 in 202 msec 2023-07-21 18:14:31,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c45990923a0173d364e05b38989ad464 2023-07-21 18:14:31,562 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:31,562 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=CLOSED 2023-07-21 18:14:31,562 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963271562"}]},"ts":"1689963271562"} 2023-07-21 18:14:31,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-21 18:14:31,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,44049,1689963263942 in 196 msec 2023-07-21 18:14:31,567 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:31,695 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 
5 retained the pre-restart assignment. 2023-07-21 18:14:31,695 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,695 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,695 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,695 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:31,696 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271695"}]},"ts":"1689963271695"} 2023-07-21 18:14:31,695 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,696 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271695"}]},"ts":"1689963271695"} 2023-07-21 18:14:31,696 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271695"}]},"ts":"1689963271695"} 2023-07-21 18:14:31,696 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963271695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271695"}]},"ts":"1689963271695"} 2023-07-21 18:14:31,696 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963271695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963271695"}]},"ts":"1689963271695"} 2023-07-21 18:14:31,698 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; 
OpenRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:31,700 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:31,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=27, state=RUNNABLE; OpenRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:31,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=26, state=RUNNABLE; OpenRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:31,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=29, state=RUNNABLE; OpenRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:31,794 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 18:14:31,856 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:31,862 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:31,864 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:31,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:31,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5eb218bd02142ac1567ed2b3a6712355, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 18:14:31,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:31,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 
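
Once the balancer picks servers from the target group, the OpenRegionProcedures above reopen the moved regions on jenkins-hbase4.apache.org,43419,... and ...,41863,.... A minimal sketch of checking where a table's regions landed after such a move, using the public RegionLocator API, follows; the class name and printout format are illustrative.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementCheckSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Each region's encoded name and the server currently hosting it.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
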
2023-07-21 18:14:31,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e172c28c803be39cd8711ba6958b193d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 18:14:31,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:31,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,879 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,880 DEBUG [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/f 2023-07-21 18:14:31,880 DEBUG [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/f 2023-07-21 18:14:31,881 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5eb218bd02142ac1567ed2b3a6712355 columnFamilyName f 2023-07-21 18:14:31,881 INFO [StoreOpener-5eb218bd02142ac1567ed2b3a6712355-1] regionserver.HStore(310): Store=5eb218bd02142ac1567ed2b3a6712355/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:31,886 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,892 DEBUG [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/f 2023-07-21 18:14:31,892 DEBUG [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/f 2023-07-21 18:14:31,893 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e172c28c803be39cd8711ba6958b193d columnFamilyName f 2023-07-21 18:14:31,894 INFO [StoreOpener-e172c28c803be39cd8711ba6958b193d-1] regionserver.HStore(310): Store=e172c28c803be39cd8711ba6958b193d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:31,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:31,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e172c28c803be39cd8711ba6958b193d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9509872000, jitterRate=-0.11432415246963501}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:31,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e172c28c803be39cd8711ba6958b193d: 2023-07-21 18:14:31,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:31,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5eb218bd02142ac1567ed2b3a6712355; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9772024480, jitterRate=-0.08990930020809174}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:31,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5eb218bd02142ac1567ed2b3a6712355: 2023-07-21 18:14:31,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355., pid=40, masterSystemTime=1689963271855 2023-07-21 18:14:31,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d., pid=37, masterSystemTime=1689963271856 2023-07-21 18:14:31,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:31,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:31,939 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,940 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271939"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963271939"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963271939"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963271939"}]},"ts":"1689963271939"} 2023-07-21 18:14:31,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:31,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:31,942 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 
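The RegionStateStore entries above persist each region's assignment to hbase:meta as Puts on the info family (regioninfo, server, serverstartcode, seqnumDuringOpen). For illustration, the same columns can be read back with an ordinary client scan; this sketch uses standard 2.x client classes, and the prefix-scan approach is an assumption for brevity rather than how HBase itself reads meta.

    // Sketch: read back the info:server column the master is writing above
    // for every region row of the table.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMetaForTable {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows in hbase:meta start with "<tablename>,<startkey>,<ts>."
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
              .addFamily(info);
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              String server = Bytes.toString(r.getValue(info, Bytes.toBytes("server")));
              System.out.println(Bytes.toStringBinary(r.getRow()) + " hosted on " + server);
            }
          }
        }
      }
    }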
2023-07-21 18:14:31,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b48f4ed9966e226aeaa1fd0b0344705b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 18:14:31,943 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:31,943 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271943"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963271943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963271943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963271943"}]},"ts":"1689963271943"} 2023-07-21 18:14:31,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:31,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,949 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=29 2023-07-21 18:14:31,949 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=29, state=SUCCESS; OpenRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,43419,1689963263425 in 238 msec 2023-07-21 18:14:31,949 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-21 18:14:31,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,41863,1689963267427 in 246 msec 2023-07-21 18:14:31,952 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, REOPEN/MOVE in 596 msec 2023-07-21 18:14:31,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, REOPEN/MOVE in 599 msec 2023-07-21 18:14:31,958 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family f of region b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,971 DEBUG [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/f 2023-07-21 18:14:31,972 DEBUG [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/f 2023-07-21 18:14:31,972 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b48f4ed9966e226aeaa1fd0b0344705b columnFamilyName f 2023-07-21 18:14:31,973 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 18:14:31,974 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-21 18:14:31,976 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 18:14:31,977 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:14:31,978 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 18:14:31,978 INFO [StoreOpener-b48f4ed9966e226aeaa1fd0b0344705b-1] regionserver.HStore(310): Store=b48f4ed9966e226aeaa1fd0b0344705b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:31,979 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:31,979 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 18:14:31,979 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:14:31,979 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 18:14:31,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:31,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b48f4ed9966e226aeaa1fd0b0344705b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9783587360, jitterRate=-0.08883242309093475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:31,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b48f4ed9966e226aeaa1fd0b0344705b: 2023-07-21 18:14:31,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b., pid=38, masterSystemTime=1689963271855 2023-07-21 18:14:31,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:31,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
2023-07-21 18:14:31,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c45990923a0173d364e05b38989ad464, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 18:14:31,996 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:31,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c45990923a0173d364e05b38989ad464 2023-07-21 18:14:31,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:31,996 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963271996"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963271996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963271996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963271996"}]},"ts":"1689963271996"} 2023-07-21 18:14:31,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:31,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=27 2023-07-21 18:14:32,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=27, state=SUCCESS; OpenRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,43419,1689963263425 in 298 msec 2023-07-21 18:14:32,004 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, REOPEN/MOVE in 653 msec 2023-07-21 18:14:32,011 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,012 DEBUG [StoreOpener-c45990923a0173d364e05b38989ad464-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/f 2023-07-21 18:14:32,012 DEBUG [StoreOpener-c45990923a0173d364e05b38989ad464-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/f 2023-07-21 18:14:32,013 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c45990923a0173d364e05b38989ad464 columnFamilyName f 2023-07-21 18:14:32,014 INFO [StoreOpener-c45990923a0173d364e05b38989ad464-1] regionserver.HStore(310): Store=c45990923a0173d364e05b38989ad464/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:32,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c45990923a0173d364e05b38989ad464; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10879434080, jitterRate=0.013226255774497986}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:32,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c45990923a0173d364e05b38989ad464: 2023-07-21 18:14:32,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464., pid=36, masterSystemTime=1689963271855 2023-07-21 18:14:32,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:32,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:32,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 
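Once the OpenRegionProcedures finish and the "Post open deploy tasks" entries publish locations to hbase:meta, a client can see where each region landed through a RegionLocator. A short sketch with the standard 2.x API; the output format is only for illustration.

    // Sketch: list every region of the table and the server currently hosting it.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocations {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Encoded names here correspond to the region hashes in the log,
            // e.g. 5eb218bd02142ac1567ed2b3a6712355.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }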
2023-07-21 18:14:32,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e4301fc2d1085820ec6d1b52321eb4b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 18:14:32,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,030 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,030 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272029"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963272029"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963272029"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963272029"}]},"ts":"1689963272029"} 2023-07-21 18:14:32,030 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,033 DEBUG [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/f 2023-07-21 18:14:32,033 DEBUG [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/f 2023-07-21 18:14:32,033 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e4301fc2d1085820ec6d1b52321eb4b columnFamilyName f 2023-07-21 18:14:32,034 INFO [StoreOpener-3e4301fc2d1085820ec6d1b52321eb4b-1] regionserver.HStore(310): Store=3e4301fc2d1085820ec6d1b52321eb4b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:32,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-21 18:14:32,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,43419,1689963263425 in 336 msec 2023-07-21 18:14:32,040 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, REOPEN/MOVE in 681 msec 2023-07-21 18:14:32,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e4301fc2d1085820ec6d1b52321eb4b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10338703840, jitterRate=-0.037133172154426575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e4301fc2d1085820ec6d1b52321eb4b: 2023-07-21 18:14:32,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b., pid=39, masterSystemTime=1689963271855 2023-07-21 18:14:32,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:32,047 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 
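These REOPEN/MOVE TransitRegionStateProcedures are the server side of the rsgroup table move that the log confirms just below ("All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group ..."). A client-side sketch of the calls behind the MoveTables and GetRSGroupInfoOfTable RPCs is given here; RSGroupAdminClient and the exact moveTables/getRSGroupInfoOfTable signatures are assumptions based on the 2.x hbase-rsgroup module and should be checked against the version in use.

    // Sketch (assumed API): move a table into an rsgroup and read the group back.
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // assumed constructor
          TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          String group = "Group_testTableMoveTruncateAndDrop_975014563"; // target group name from the log
          // Moving the table is what drives the REOPEN/MOVE procedures seen above.
          rsGroupAdmin.moveTables(Collections.singleton(tn), group);
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tn);
          System.out.println("table now in group: " + info.getName());
        }
      }
    }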
2023-07-21 18:14:32,048 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,048 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272048"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963272048"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963272048"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963272048"}]},"ts":"1689963272048"} 2023-07-21 18:14:32,053 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=26 2023-07-21 18:14:32,053 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=26, state=SUCCESS; OpenRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,43419,1689963263425 in 348 msec 2023-07-21 18:14:32,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, REOPEN/MOVE in 707 msec 2023-07-21 18:14:32,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-21 18:14:32,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_975014563. 2023-07-21 18:14:32,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:32,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:32,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:32,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:32,369 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:32,376 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=41, 
state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,395 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963272395"}]},"ts":"1689963272395"} 2023-07-21 18:14:32,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 18:14:32,397 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 18:14:32,399 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 18:14:32,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, UNASSIGN}] 2023-07-21 18:14:32,408 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, UNASSIGN 2023-07-21 18:14:32,409 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, UNASSIGN 2023-07-21 18:14:32,409 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, UNASSIGN 2023-07-21 18:14:32,410 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, UNASSIGN 2023-07-21 18:14:32,410 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, UNASSIGN 2023-07-21 18:14:32,412 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,412 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272412"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963272412"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963272412"}]},"ts":"1689963272412"} 2023-07-21 18:14:32,413 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,413 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272413"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963272413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963272413"}]},"ts":"1689963272413"} 2023-07-21 18:14:32,413 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:32,413 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,414 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272413"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963272413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963272413"}]},"ts":"1689963272413"} 2023-07-21 18:14:32,414 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272413"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963272413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963272413"}]},"ts":"1689963272413"} 2023-07-21 18:14:32,414 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:32,414 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272413"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963272413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963272413"}]},"ts":"1689963272413"} 2023-07-21 18:14:32,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:32,416 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=46, state=RUNNABLE; CloseRegionProcedure c45990923a0173d364e05b38989ad464, 
server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:32,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=44, state=RUNNABLE; CloseRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:32,419 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=43, state=RUNNABLE; CloseRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:32,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=45, state=RUNNABLE; CloseRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:32,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 18:14:32,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:32,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b48f4ed9966e226aeaa1fd0b0344705b, disabling compactions & flushes 2023-07-21 18:14:32,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:32,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:32,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. after waiting 0 ms 2023-07-21 18:14:32,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:32,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:32,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e172c28c803be39cd8711ba6958b193d, disabling compactions & flushes 2023-07-21 18:14:32,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:32,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:32,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. after waiting 0 ms 2023-07-21 18:14:32,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 
2023-07-21 18:14:32,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:32,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:32,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b. 2023-07-21 18:14:32,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b48f4ed9966e226aeaa1fd0b0344705b: 2023-07-21 18:14:32,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d. 2023-07-21 18:14:32,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e172c28c803be39cd8711ba6958b193d: 2023-07-21 18:14:32,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:32,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c45990923a0173d364e05b38989ad464, disabling compactions & flushes 2023-07-21 18:14:32,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:32,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 2023-07-21 18:14:32,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. after waiting 0 ms 2023-07-21 18:14:32,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
2023-07-21 18:14:32,584 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=b48f4ed9966e226aeaa1fd0b0344705b, regionState=CLOSED 2023-07-21 18:14:32,585 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272584"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272584"}]},"ts":"1689963272584"} 2023-07-21 18:14:32,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:32,586 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=e172c28c803be39cd8711ba6958b193d, regionState=CLOSED 2023-07-21 18:14:32,586 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272586"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272586"}]},"ts":"1689963272586"} 2023-07-21 18:14:32,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=43 2023-07-21 18:14:32,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=43, state=SUCCESS; CloseRegionProcedure b48f4ed9966e226aeaa1fd0b0344705b, server=jenkins-hbase4.apache.org,43419,1689963263425 in 169 msec 2023-07-21 18:14:32,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:32,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464. 
2023-07-21 18:14:32,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c45990923a0173d364e05b38989ad464: 2023-07-21 18:14:32,596 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-21 18:14:32,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; CloseRegionProcedure e172c28c803be39cd8711ba6958b193d, server=jenkins-hbase4.apache.org,41863,1689963267427 in 172 msec 2023-07-21 18:14:32,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b48f4ed9966e226aeaa1fd0b0344705b, UNASSIGN in 190 msec 2023-07-21 18:14:32,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,599 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e172c28c803be39cd8711ba6958b193d, UNASSIGN in 194 msec 2023-07-21 18:14:32,599 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=c45990923a0173d364e05b38989ad464, regionState=CLOSED 2023-07-21 18:14:32,599 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272599"}]},"ts":"1689963272599"} 2023-07-21 18:14:32,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e4301fc2d1085820ec6d1b52321eb4b, disabling compactions & flushes 2023-07-21 18:14:32,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:32,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:32,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. after waiting 0 ms 2023-07-21 18:14:32,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 
2023-07-21 18:14:32,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=46 2023-07-21 18:14:32,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=46, state=SUCCESS; CloseRegionProcedure c45990923a0173d364e05b38989ad464, server=jenkins-hbase4.apache.org,43419,1689963263425 in 187 msec 2023-07-21 18:14:32,611 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c45990923a0173d364e05b38989ad464, UNASSIGN in 207 msec 2023-07-21 18:14:32,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:32,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b. 2023-07-21 18:14:32,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e4301fc2d1085820ec6d1b52321eb4b: 2023-07-21 18:14:32,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:32,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5eb218bd02142ac1567ed2b3a6712355, disabling compactions & flushes 2023-07-21 18:14:32,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:32,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:32,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. after waiting 0 ms 2023-07-21 18:14:32,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 
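The CloseRegionProcedures above carry out the client-requested disable (pid=41); the log then moves on to a truncate with preserveSplits=true (pid=52). Both operations map to standard Admin calls, as in this minimal sketch (class name illustrative).

    // Sketch: the client-side Admin calls whose server-side handling is logged here.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableAndTruncate {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(tn);        // blocks until the DisableTableProcedure completes
          admin.truncateTable(tn, true); // preserveSplits=true, matching the log
        }
      }
    }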
2023-07-21 18:14:32,624 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=3e4301fc2d1085820ec6d1b52321eb4b, regionState=CLOSED 2023-07-21 18:14:32,624 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272624"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272624"}]},"ts":"1689963272624"} 2023-07-21 18:14:32,629 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-21 18:14:32,629 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 3e4301fc2d1085820ec6d1b52321eb4b, server=jenkins-hbase4.apache.org,43419,1689963263425 in 212 msec 2023-07-21 18:14:32,631 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e4301fc2d1085820ec6d1b52321eb4b, UNASSIGN in 227 msec 2023-07-21 18:14:32,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:32,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355. 2023-07-21 18:14:32,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5eb218bd02142ac1567ed2b3a6712355: 2023-07-21 18:14:32,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:32,637 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5eb218bd02142ac1567ed2b3a6712355, regionState=CLOSED 2023-07-21 18:14:32,637 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272637"}]},"ts":"1689963272637"} 2023-07-21 18:14:32,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=45 2023-07-21 18:14:32,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; CloseRegionProcedure 5eb218bd02142ac1567ed2b3a6712355, server=jenkins-hbase4.apache.org,43419,1689963263425 in 222 msec 2023-07-21 18:14:32,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-21 18:14:32,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5eb218bd02142ac1567ed2b3a6712355, UNASSIGN in 248 msec 2023-07-21 18:14:32,655 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963272655"}]},"ts":"1689963272655"} 2023-07-21 18:14:32,657 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 18:14:32,659 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 18:14:32,663 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 276 msec 2023-07-21 18:14:32,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 18:14:32,700 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-21 18:14:32,701 INFO [Listener at localhost/36435] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:32,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 18:14:32,718 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 18:14:32,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 18:14:32,730 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:32,730 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,730 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:32,730 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:32,730 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,736 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/f, FileablePath, 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits] 2023-07-21 18:14:32,736 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits] 2023-07-21 18:14:32,736 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits] 2023-07-21 18:14:32,736 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits] 2023-07-21 18:14:32,737 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits] 2023-07-21 18:14:32,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d/recovered.edits/7.seqid 2023-07-21 18:14:32,754 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b/recovered.edits/7.seqid 2023-07-21 18:14:32,755 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits/7.seqid to 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464/recovered.edits/7.seqid 2023-07-21 18:14:32,755 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355/recovered.edits/7.seqid 2023-07-21 18:14:32,756 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e172c28c803be39cd8711ba6958b193d 2023-07-21 18:14:32,756 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b48f4ed9966e226aeaa1fd0b0344705b 2023-07-21 18:14:32,757 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b/recovered.edits/7.seqid 2023-07-21 18:14:32,757 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5eb218bd02142ac1567ed2b3a6712355 2023-07-21 18:14:32,758 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c45990923a0173d364e05b38989ad464 2023-07-21 18:14:32,758 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e4301fc2d1085820ec6d1b52321eb4b 2023-07-21 18:14:32,758 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 18:14:32,786 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 18:14:32,790 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 18:14:32,791 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 18:14:32,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963272791"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963272791"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963272791"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963272791"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963272791"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,795 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 18:14:32,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3e4301fc2d1085820ec6d1b52321eb4b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963268977.3e4301fc2d1085820ec6d1b52321eb4b.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => b48f4ed9966e226aeaa1fd0b0344705b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963268977.b48f4ed9966e226aeaa1fd0b0344705b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e172c28c803be39cd8711ba6958b193d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963268977.e172c28c803be39cd8711ba6958b193d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5eb218bd02142ac1567ed2b3a6712355, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963268977.5eb218bd02142ac1567ed2b3a6712355.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => c45990923a0173d364e05b38989ad464, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963268977.c45990923a0173d364e05b38989ad464.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 18:14:32,795 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 18:14:32,796 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963272796"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:32,798 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 18:14:32,807 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:32,807 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:32,807 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:32,807 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:32,807 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:32,808 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 empty. 2023-07-21 18:14:32,809 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 empty. 2023-07-21 18:14:32,809 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 empty. 2023-07-21 18:14:32,809 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 empty. 2023-07-21 18:14:32,809 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 empty. 
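The DISABLE and TRUNCATE operations recorded above (pid=41 DisableTableProcedure, then pid=52 TruncateTableProcedure with preserveSplits=true) are the sequence a client normally drives through the HBase Admin API. The following is a minimal illustrative sketch only: the connection setup is generic and the exact calls made inside TestRSGroupsAdmin1 may differ, but Admin.disableTable and Admin.truncateTable(table, preserveSplits) are the standard entry points for the procedures seen in this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateSketch {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath (in the test, the minicluster config).
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
                // A table must be disabled before it can be truncated; this call blocks
                // until the DisableTableProcedure on the master completes.
                admin.disableTable(tn);
                // preserveSplits=true keeps the existing split keys, so the table is
                // recreated with the same five region boundaries seen above.
                admin.truncateTable(tn, true);
            }
        }
    }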
2023-07-21 18:14:32,810 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:32,810 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:32,810 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:32,810 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:32,810 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:32,810 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 18:14:32,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 18:14:32,836 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:32,839 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b55aa793676653f77b02d9e6920f49a4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:32,840 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 37d0e541d149bcfeb77fc44839acfc02, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:32,847 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => cf2d2f1ccb766fac970ec8cd2e6f3204, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:32,896 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,896 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 37d0e541d149bcfeb77fc44839acfc02, disabling compactions & flushes 2023-07-21 18:14:32,896 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:32,897 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:32,897 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. after waiting 0 ms 2023-07-21 18:14:32,897 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:32,897 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 
2023-07-21 18:14:32,897 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 37d0e541d149bcfeb77fc44839acfc02: 2023-07-21 18:14:32,897 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 60a36ac1167cc97e792f0d288a8abba2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:32,905 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,905 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b55aa793676653f77b02d9e6920f49a4, disabling compactions & flushes 2023-07-21 18:14:32,905 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:32,905 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:32,906 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. after waiting 0 ms 2023-07-21 18:14:32,906 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:32,906 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 
2023-07-21 18:14:32,906 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b55aa793676653f77b02d9e6920f49a4: 2023-07-21 18:14:32,906 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => fbf1a282059815bd5989ff24e5005534, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:32,909 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,910 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing cf2d2f1ccb766fac970ec8cd2e6f3204, disabling compactions & flushes 2023-07-21 18:14:32,910 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:32,910 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:32,910 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. after waiting 0 ms 2023-07-21 18:14:32,910 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:32,910 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 
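The HRegion(7675) "creating" entries above show the table being rebuilt with a single column family 'f' (VERSIONS => '1') and the previous split boundaries. For reference only, a pre-split table of that shape could be created with the TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API roughly as sketched below; this is not the code used by the test, and only the printable split keys are reproduced (the two binary boundaries between them are left out).

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitSketch {
        // Assumes an Admin handle obtained as in the earlier sketch.
        static void createPreSplitTable(Admin admin) throws Exception {
            TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
            TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                    .setMaxVersions(1)   // matches VERSIONS => '1' in the logged descriptor
                    .build())
                .build();
            // The test's actual split set also contains two non-printable keys between
            // these boundaries; only the printable ones are shown here.
            byte[][] splits = new byte[][] { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };
            admin.createTable(desc, splits);
        }
    }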
2023-07-21 18:14:32,910 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for cf2d2f1ccb766fac970ec8cd2e6f3204: 2023-07-21 18:14:32,923 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,923 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 60a36ac1167cc97e792f0d288a8abba2, disabling compactions & flushes 2023-07-21 18:14:32,923 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:32,923 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:32,923 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. after waiting 0 ms 2023-07-21 18:14:32,923 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:32,923 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:32,924 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 60a36ac1167cc97e792f0d288a8abba2: 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing fbf1a282059815bd5989ff24e5005534, disabling compactions & flushes 2023-07-21 18:14:32,932 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 
after waiting 0 ms 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:32,932 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:32,932 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for fbf1a282059815bd5989ff24e5005534: 2023-07-21 18:14:32,937 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272937"}]},"ts":"1689963272937"} 2023-07-21 18:14:32,937 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272937"}]},"ts":"1689963272937"} 2023-07-21 18:14:32,937 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272937"}]},"ts":"1689963272937"} 2023-07-21 18:14:32,937 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963272937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272937"}]},"ts":"1689963272937"} 2023-07-21 18:14:32,937 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963272937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963272937"}]},"ts":"1689963272937"} 2023-07-21 18:14:32,940 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 18:14:32,942 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963272941"}]},"ts":"1689963272941"} 2023-07-21 18:14:32,944 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 18:14:32,948 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:32,948 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:32,948 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:32,949 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:32,952 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, ASSIGN}] 2023-07-21 18:14:32,954 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, ASSIGN 2023-07-21 18:14:32,954 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, ASSIGN 2023-07-21 18:14:32,954 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, ASSIGN 2023-07-21 18:14:32,954 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, ASSIGN 2023-07-21 18:14:32,955 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, ASSIGN 2023-07-21 18:14:32,955 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:32,955 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:32,955 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:32,956 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:32,956 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:33,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 18:14:33,106 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
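After the balancer entry above ("Reassigned 5 regions. 5 retained the pre-restart assignment."), the ASSIGN subprocedures place the five recreated regions on the region servers. One hedged way for a client to confirm where the regions landed once pid=52 completes is to walk the table's RegionLocator, as sketched below; this assumes the same open Connection as in the first sketch and is illustrative rather than what the test itself does.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
        // Assumes an open Connection, as in the first sketch.
        static void printRegionLocations(Connection conn) throws Exception {
            TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
            try (RegionLocator locator = conn.getRegionLocator(tn)) {
                for (HRegionLocation loc : locator.getAllRegionLocations()) {
                    // Encoded region name and hosting server, e.g.
                    // b55aa793676653f77b02d9e6920f49a4 on jenkins-hbase4.apache.org,43419,...
                    System.out.println(loc.getRegion().getEncodedName()
                        + " on " + loc.getServerName());
                }
            }
        }
    }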
2023-07-21 18:14:33,111 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=60a36ac1167cc97e792f0d288a8abba2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,111 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=fbf1a282059815bd5989ff24e5005534, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,111 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273111"}]},"ts":"1689963273111"} 2023-07-21 18:14:33,111 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273111"}]},"ts":"1689963273111"} 2023-07-21 18:14:33,111 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=cf2d2f1ccb766fac970ec8cd2e6f3204, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,111 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=b55aa793676653f77b02d9e6920f49a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,112 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273111"}]},"ts":"1689963273111"} 2023-07-21 18:14:33,112 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273111"}]},"ts":"1689963273111"} 2023-07-21 18:14:33,111 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=37d0e541d149bcfeb77fc44839acfc02, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,112 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273111"}]},"ts":"1689963273111"} 2023-07-21 18:14:33,114 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=56, state=RUNNABLE; OpenRegionProcedure 
60a36ac1167cc97e792f0d288a8abba2, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:33,115 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=57, state=RUNNABLE; OpenRegionProcedure fbf1a282059815bd5989ff24e5005534, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,117 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure cf2d2f1ccb766fac970ec8cd2e6f3204, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=53, state=RUNNABLE; OpenRegionProcedure b55aa793676653f77b02d9e6920f49a4, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:33,121 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=54, state=RUNNABLE; OpenRegionProcedure 37d0e541d149bcfeb77fc44839acfc02, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:33,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b55aa793676653f77b02d9e6920f49a4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 18:14:33,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 
2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 37d0e541d149bcfeb77fc44839acfc02, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,274 INFO [StoreOpener-b55aa793676653f77b02d9e6920f49a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,275 INFO [StoreOpener-37d0e541d149bcfeb77fc44839acfc02-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,276 DEBUG [StoreOpener-b55aa793676653f77b02d9e6920f49a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/f 2023-07-21 18:14:33,276 DEBUG [StoreOpener-b55aa793676653f77b02d9e6920f49a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/f 2023-07-21 18:14:33,277 INFO [StoreOpener-b55aa793676653f77b02d9e6920f49a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b55aa793676653f77b02d9e6920f49a4 columnFamilyName f 2023-07-21 18:14:33,277 INFO [StoreOpener-b55aa793676653f77b02d9e6920f49a4-1] regionserver.HStore(310): Store=b55aa793676653f77b02d9e6920f49a4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:33,278 DEBUG [StoreOpener-37d0e541d149bcfeb77fc44839acfc02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/f 2023-07-21 18:14:33,278 DEBUG [StoreOpener-37d0e541d149bcfeb77fc44839acfc02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/f 2023-07-21 18:14:33,278 INFO [StoreOpener-37d0e541d149bcfeb77fc44839acfc02-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 37d0e541d149bcfeb77fc44839acfc02 columnFamilyName f 2023-07-21 18:14:33,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,282 INFO [StoreOpener-37d0e541d149bcfeb77fc44839acfc02-1] regionserver.HStore(310): Store=37d0e541d149bcfeb77fc44839acfc02/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:33,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:33,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:33,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened b55aa793676653f77b02d9e6920f49a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11264080800, jitterRate=0.04904927313327789}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:33,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b55aa793676653f77b02d9e6920f49a4: 2023-07-21 18:14:33,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:33,290 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4., pid=61, masterSystemTime=1689963273267 2023-07-21 18:14:33,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:33,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:33,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:33,293 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=b55aa793676653f77b02d9e6920f49a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 60a36ac1167cc97e792f0d288a8abba2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 18:14:33,293 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273293"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963273293"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963273293"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963273293"}]},"ts":"1689963273293"} 2023-07-21 18:14:33,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:33,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 
60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=53 2023-07-21 18:14:33,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=53, state=SUCCESS; OpenRegionProcedure b55aa793676653f77b02d9e6920f49a4, server=jenkins-hbase4.apache.org,43419,1689963263425 in 177 msec 2023-07-21 18:14:33,300 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, ASSIGN in 349 msec 2023-07-21 18:14:33,305 INFO [StoreOpener-60a36ac1167cc97e792f0d288a8abba2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:33,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 37d0e541d149bcfeb77fc44839acfc02; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9739186080, jitterRate=-0.09296761453151703}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:33,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 37d0e541d149bcfeb77fc44839acfc02: 2023-07-21 18:14:33,308 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02., pid=62, masterSystemTime=1689963273268 2023-07-21 18:14:33,309 DEBUG [StoreOpener-60a36ac1167cc97e792f0d288a8abba2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/f 2023-07-21 18:14:33,309 DEBUG [StoreOpener-60a36ac1167cc97e792f0d288a8abba2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/f 2023-07-21 18:14:33,309 INFO [StoreOpener-60a36ac1167cc97e792f0d288a8abba2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 60a36ac1167cc97e792f0d288a8abba2 
columnFamilyName f 2023-07-21 18:14:33,311 INFO [StoreOpener-60a36ac1167cc97e792f0d288a8abba2-1] regionserver.HStore(310): Store=60a36ac1167cc97e792f0d288a8abba2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:33,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:33,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:33,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cf2d2f1ccb766fac970ec8cd2e6f3204, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,313 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=37d0e541d149bcfeb77fc44839acfc02, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,313 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273313"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963273313"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963273313"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963273313"}]},"ts":"1689963273313"} 2023-07-21 18:14:33,314 INFO [StoreOpener-cf2d2f1ccb766fac970ec8cd2e6f3204-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,317 DEBUG [StoreOpener-cf2d2f1ccb766fac970ec8cd2e6f3204-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/f 2023-07-21 18:14:33,317 DEBUG [StoreOpener-cf2d2f1ccb766fac970ec8cd2e6f3204-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/f 2023-07-21 18:14:33,318 INFO [StoreOpener-cf2d2f1ccb766fac970ec8cd2e6f3204-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cf2d2f1ccb766fac970ec8cd2e6f3204 columnFamilyName f 2023-07-21 18:14:33,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:33,319 INFO [StoreOpener-cf2d2f1ccb766fac970ec8cd2e6f3204-1] regionserver.HStore(310): Store=cf2d2f1ccb766fac970ec8cd2e6f3204/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:33,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:33,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=54 2023-07-21 18:14:33,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=54, state=SUCCESS; OpenRegionProcedure 37d0e541d149bcfeb77fc44839acfc02, server=jenkins-hbase4.apache.org,41863,1689963267427 in 195 msec 2023-07-21 18:14:33,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 60a36ac1167cc97e792f0d288a8abba2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9906686080, jitterRate=-0.07736796140670776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:33,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 18:14:33,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 60a36ac1167cc97e792f0d288a8abba2: 2023-07-21 18:14:33,326 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, ASSIGN in 371 msec 2023-07-21 18:14:33,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2., pid=58, masterSystemTime=1689963273267 2023-07-21 18:14:33,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:33,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:33,329 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 
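
Each "Opened <encoded region>" entry above ends with the effective split policy for that region. The per-region desiredMaxFileSize values (11264080800, 9739186080, 9906686080, ...) differ from one another because ConstantSizeRegionSplitPolicy applies the logged jitterRate to the configured maximum file size once per region at open time; against the stock 10 GiB default (10737418240 bytes) the arithmetic matches to within rounding, e.g. 10737418240 * (1 - 0.07736796) ~ 9906686080. A minimal sketch of that calculation, assuming the 10 GiB default rather than anything the log states explicitly:

```java
// Minimal sketch, not HBase code: reproduces the desiredMaxFileSize values logged above,
// assuming the cluster runs with the stock hbase.hregion.max.filesize default of 10 GiB.
public class SplitSizeJitter {
    public static void main(String[] args) {
        long maxFileSize = 10_737_418_240L;  // assumed hbase.hregion.max.filesize (10 GiB default)
        // jitterRate values copied from the "Opened <region>" log entries in this section
        double[] jitterRates = {0.04904927, -0.09296761, -0.07736796, 0.12242214, -0.05678086};
        for (double jitterRate : jitterRates) {
            // ConstantSizeRegionSplitPolicy perturbs the configured size once per region at open time
            long desired = maxFileSize + (long) (maxFileSize * jitterRate);
            System.out.printf("jitterRate=%+.8f -> desiredMaxFileSize~=%d%n", jitterRate, desired);
        }
    }
}
```

The same arithmetic is why each of the five regions logs a slightly different desiredMaxFileSize even though they share a single configuration.
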
2023-07-21 18:14:33,329 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=60a36ac1167cc97e792f0d288a8abba2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,330 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273329"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963273329"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963273329"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963273329"}]},"ts":"1689963273329"} 2023-07-21 18:14:33,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:33,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cf2d2f1ccb766fac970ec8cd2e6f3204; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12051916000, jitterRate=0.12242214381694794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:33,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cf2d2f1ccb766fac970ec8cd2e6f3204: 2023-07-21 18:14:33,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204., pid=60, masterSystemTime=1689963273268 2023-07-21 18:14:33,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:33,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:33,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 
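
The RegionStateStore(219)/(405) entries above record each reopened region being marked OPEN in hbase:meta, together with its openSeqNum and regionLocation, as the assignment subprocedures finish. For reference, the same assignment information is visible from a client through the RegionLocator API; the sketch below is generic client boilerplate (connection setup and printing are assumptions for illustration, not part of the test), with the table name taken from the log:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
            // Each location mirrors a regionState=OPEN / regionLocation=... update written to hbase:meta above
            for (HRegionLocation loc : locator.getAllRegionLocations()) {
                System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
            }
        }
    }
}
```
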
2023-07-21 18:14:33,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fbf1a282059815bd5989ff24e5005534, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 18:14:33,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=56 2023-07-21 18:14:33,336 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=cf2d2f1ccb766fac970ec8cd2e6f3204, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=56, state=SUCCESS; OpenRegionProcedure 60a36ac1167cc97e792f0d288a8abba2, server=jenkins-hbase4.apache.org,43419,1689963263425 in 218 msec 2023-07-21 18:14:33,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:33,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,337 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273336"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963273336"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963273336"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963273336"}]},"ts":"1689963273336"} 2023-07-21 18:14:33,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, ASSIGN in 384 msec 2023-07-21 18:14:33,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-21 18:14:33,343 INFO [StoreOpener-fbf1a282059815bd5989ff24e5005534-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure cf2d2f1ccb766fac970ec8cd2e6f3204, server=jenkins-hbase4.apache.org,41863,1689963267427 in 222 msec 2023-07-21 18:14:33,345 DEBUG [StoreOpener-fbf1a282059815bd5989ff24e5005534-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/f 2023-07-21 18:14:33,345 DEBUG [StoreOpener-fbf1a282059815bd5989ff24e5005534-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/f 2023-07-21 18:14:33,346 INFO [StoreOpener-fbf1a282059815bd5989ff24e5005534-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fbf1a282059815bd5989ff24e5005534 columnFamilyName f 2023-07-21 18:14:33,347 INFO [StoreOpener-fbf1a282059815bd5989ff24e5005534-1] regionserver.HStore(310): Store=fbf1a282059815bd5989ff24e5005534/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:33,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,348 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, ASSIGN in 391 msec 2023-07-21 18:14:33,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:33,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:33,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fbf1a282059815bd5989ff24e5005534; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10127738400, jitterRate=-0.0567808598279953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:33,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fbf1a282059815bd5989ff24e5005534: 2023-07-21 18:14:33,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534., pid=59, masterSystemTime=1689963273268 2023-07-21 18:14:33,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:33,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:33,363 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=fbf1a282059815bd5989ff24e5005534, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,364 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273363"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963273363"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963273363"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963273363"}]},"ts":"1689963273363"} 2023-07-21 18:14:33,369 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=57 2023-07-21 18:14:33,369 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=57, state=SUCCESS; OpenRegionProcedure fbf1a282059815bd5989ff24e5005534, server=jenkins-hbase4.apache.org,41863,1689963267427 in 252 msec 2023-07-21 18:14:33,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-21 18:14:33,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, ASSIGN in 417 msec 2023-07-21 18:14:33,375 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963273375"}]},"ts":"1689963273375"} 2023-07-21 18:14:33,377 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 18:14:33,379 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 18:14:33,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 671 msec 2023-07-21 18:14:33,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 18:14:33,826 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-21 18:14:33,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 
18:14:33,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:33,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:33,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:33,830 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:33,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:33,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:33,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 18:14:33,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963273835"}]},"ts":"1689963273835"} 2023-07-21 18:14:33,837 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 18:14:33,839 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 18:14:33,840 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, UNASSIGN}] 2023-07-21 18:14:33,842 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, UNASSIGN 2023-07-21 18:14:33,843 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, UNASSIGN 2023-07-21 18:14:33,843 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, UNASSIGN 2023-07-21 18:14:33,843 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, UNASSIGN 2023-07-21 18:14:33,843 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, UNASSIGN 2023-07-21 18:14:33,844 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=37d0e541d149bcfeb77fc44839acfc02, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,845 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273844"}]},"ts":"1689963273844"} 2023-07-21 18:14:33,845 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=cf2d2f1ccb766fac970ec8cd2e6f3204, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,845 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=60a36ac1167cc97e792f0d288a8abba2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,845 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273845"}]},"ts":"1689963273845"} 2023-07-21 18:14:33,845 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963273845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273845"}]},"ts":"1689963273845"} 2023-07-21 18:14:33,846 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=b55aa793676653f77b02d9e6920f49a4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:33,846 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273846"}]},"ts":"1689963273846"} 2023-07-21 18:14:33,846 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=fbf1a282059815bd5989ff24e5005534, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:33,846 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963273846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963273846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963273846"}]},"ts":"1689963273846"} 2023-07-21 18:14:33,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=65, state=RUNNABLE; CloseRegionProcedure 37d0e541d149bcfeb77fc44839acfc02, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,850 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=66, state=RUNNABLE; CloseRegionProcedure cf2d2f1ccb766fac970ec8cd2e6f3204, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=67, state=RUNNABLE; CloseRegionProcedure 60a36ac1167cc97e792f0d288a8abba2, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:33,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=64, state=RUNNABLE; CloseRegionProcedure b55aa793676653f77b02d9e6920f49a4, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:33,852 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure fbf1a282059815bd5989ff24e5005534, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:33,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 18:14:34,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:34,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:34,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 37d0e541d149bcfeb77fc44839acfc02, disabling compactions & flushes 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 60a36ac1167cc97e792f0d288a8abba2, disabling compactions & flushes 2023-07-21 18:14:34,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:34,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 
2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. after waiting 0 ms 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. after waiting 0 ms 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:34,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:34,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:34,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:34,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2. 2023-07-21 18:14:34,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 60a36ac1167cc97e792f0d288a8abba2: 2023-07-21 18:14:34,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02. 2023-07-21 18:14:34,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 37d0e541d149bcfeb77fc44839acfc02: 2023-07-21 18:14:34,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:34,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:34,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b55aa793676653f77b02d9e6920f49a4, disabling compactions & flushes 2023-07-21 18:14:34,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 
2023-07-21 18:14:34,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:34,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. after waiting 0 ms 2023-07-21 18:14:34,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 2023-07-21 18:14:34,020 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=60a36ac1167cc97e792f0d288a8abba2, regionState=CLOSED 2023-07-21 18:14:34,020 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963274020"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963274020"}]},"ts":"1689963274020"} 2023-07-21 18:14:34,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:34,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:34,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cf2d2f1ccb766fac970ec8cd2e6f3204, disabling compactions & flushes 2023-07-21 18:14:34,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:34,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:34,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. after waiting 0 ms 2023-07-21 18:14:34,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 
2023-07-21 18:14:34,023 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=37d0e541d149bcfeb77fc44839acfc02, regionState=CLOSED 2023-07-21 18:14:34,023 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963274023"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963274023"}]},"ts":"1689963274023"} 2023-07-21 18:14:34,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=67 2023-07-21 18:14:34,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=67, state=SUCCESS; CloseRegionProcedure 60a36ac1167cc97e792f0d288a8abba2, server=jenkins-hbase4.apache.org,43419,1689963263425 in 174 msec 2023-07-21 18:14:34,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=65 2023-07-21 18:14:34,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=65, state=SUCCESS; CloseRegionProcedure 37d0e541d149bcfeb77fc44839acfc02, server=jenkins-hbase4.apache.org,41863,1689963267427 in 179 msec 2023-07-21 18:14:34,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:34,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:34,032 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60a36ac1167cc97e792f0d288a8abba2, UNASSIGN in 190 msec 2023-07-21 18:14:34,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204. 2023-07-21 18:14:34,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cf2d2f1ccb766fac970ec8cd2e6f3204: 2023-07-21 18:14:34,033 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=37d0e541d149bcfeb77fc44839acfc02, UNASSIGN in 191 msec 2023-07-21 18:14:34,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4. 
2023-07-21 18:14:34,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b55aa793676653f77b02d9e6920f49a4: 2023-07-21 18:14:34,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:34,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:34,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fbf1a282059815bd5989ff24e5005534, disabling compactions & flushes 2023-07-21 18:14:34,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:34,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:34,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. after waiting 0 ms 2023-07-21 18:14:34,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 2023-07-21 18:14:34,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:34,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534. 
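
The "Checking to see if procedure is done pid=63" lines interleaved with the region close handlers are the client polling the master while its DisableTableProcedure runs; once the procedure reaches SUCCESS, HBaseAdmin$TableFuture logs "Operation: DISABLE ... completed" further down. A hedged sketch of the client-side pattern that produces this polling, shown with the asynchronous variant of the public Admin API for illustration (the synchronous admin.disableTable(tn) wraps the same future internally):

```java
import java.util.concurrent.Future;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAndWait {
    public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // disableTableAsync submits a DisableTableProcedure on the master and returns a future;
            // waiting on it is what drives the periodic "Checking to see if procedure is done" calls.
            Future<Void> pending = admin.disableTableAsync(tn);
            pending.get();
            System.out.println("disabled=" + admin.isTableDisabled(tn));
        }
    }
}
```
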
2023-07-21 18:14:34,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fbf1a282059815bd5989ff24e5005534: 2023-07-21 18:14:34,049 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=cf2d2f1ccb766fac970ec8cd2e6f3204, regionState=CLOSED 2023-07-21 18:14:34,049 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689963274049"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963274049"}]},"ts":"1689963274049"} 2023-07-21 18:14:34,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:34,052 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=b55aa793676653f77b02d9e6920f49a4, regionState=CLOSED 2023-07-21 18:14:34,052 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963274052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963274052"}]},"ts":"1689963274052"} 2023-07-21 18:14:34,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:34,059 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=fbf1a282059815bd5989ff24e5005534, regionState=CLOSED 2023-07-21 18:14:34,059 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689963274059"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963274059"}]},"ts":"1689963274059"} 2023-07-21 18:14:34,061 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=66 2023-07-21 18:14:34,061 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; CloseRegionProcedure cf2d2f1ccb766fac970ec8cd2e6f3204, server=jenkins-hbase4.apache.org,41863,1689963267427 in 201 msec 2023-07-21 18:14:34,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=64 2023-07-21 18:14:34,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=64, state=SUCCESS; CloseRegionProcedure b55aa793676653f77b02d9e6920f49a4, server=jenkins-hbase4.apache.org,43419,1689963263425 in 203 msec 2023-07-21 18:14:34,063 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b55aa793676653f77b02d9e6920f49a4, UNASSIGN in 221 msec 2023-07-21 18:14:34,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cf2d2f1ccb766fac970ec8cd2e6f3204, UNASSIGN in 221 msec 2023-07-21 18:14:34,065 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-21 18:14:34,065 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure fbf1a282059815bd5989ff24e5005534, server=jenkins-hbase4.apache.org,41863,1689963267427 in 210 msec 2023-07-21 18:14:34,067 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-21 18:14:34,068 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fbf1a282059815bd5989ff24e5005534, UNASSIGN in 225 msec 2023-07-21 18:14:34,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963274069"}]},"ts":"1689963274069"} 2023-07-21 18:14:34,072 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 18:14:34,076 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 18:14:34,079 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 248 msec 2023-07-21 18:14:34,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 18:14:34,138 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-21 18:14:34,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,156 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_975014563' 2023-07-21 18:14:34,158 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:34,169 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:34,169 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:34,169 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:34,169 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:34,169 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:34,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:34,178 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/recovered.edits] 2023-07-21 18:14:34,178 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/recovered.edits] 2023-07-21 18:14:34,178 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/recovered.edits] 2023-07-21 18:14:34,178 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/recovered.edits] 2023-07-21 18:14:34,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 18:14:34,190 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4/recovered.edits/4.seqid 2023-07-21 18:14:34,190 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2/recovered.edits/4.seqid 2023-07-21 18:14:34,190 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534/recovered.edits/4.seqid 2023-07-21 18:14:34,191 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b55aa793676653f77b02d9e6920f49a4 2023-07-21 18:14:34,191 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60a36ac1167cc97e792f0d288a8abba2 2023-07-21 18:14:34,192 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02/recovered.edits/4.seqid 2023-07-21 18:14:34,192 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/recovered.edits] 2023-07-21 18:14:34,192 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fbf1a282059815bd5989ff24e5005534 2023-07-21 18:14:34,193 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/37d0e541d149bcfeb77fc44839acfc02 2023-07-21 18:14:34,200 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204/recovered.edits/4.seqid 2023-07-21 18:14:34,200 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cf2d2f1ccb766fac970ec8cd2e6f3204 2023-07-21 18:14:34,201 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 18:14:34,204 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,212 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 18:14:34,220 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 18:14:34,222 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963274223"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963274223"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963274223"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963274223"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,223 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963274223"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,226 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 18:14:34,226 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b55aa793676653f77b02d9e6920f49a4, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689963272760.b55aa793676653f77b02d9e6920f49a4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 37d0e541d149bcfeb77fc44839acfc02, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689963272760.37d0e541d149bcfeb77fc44839acfc02.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => cf2d2f1ccb766fac970ec8cd2e6f3204, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689963272760.cf2d2f1ccb766fac970ec8cd2e6f3204.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 60a36ac1167cc97e792f0d288a8abba2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689963272761.60a36ac1167cc97e792f0d288a8abba2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => fbf1a282059815bd5989ff24e5005534, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689963272761.fbf1a282059815bd5989ff24e5005534.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 18:14:34,226 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-21 18:14:34,226 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963274226"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:34,229 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 18:14:34,232 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 18:14:34,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 88 msec 2023-07-21 18:14:34,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 18:14:34,284 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-21 18:14:34,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:34,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
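The entries above record DeleteTableProcedure pid=74 archiving the five regions, deleting their rows from hbase:meta, dropping the table descriptor, and the client then asking the rsgroup endpoint for group info. For reference, a minimal client-side sketch of calls that drive this kind of server activity, assuming a fresh Connection and the branch-2.4 RSGroupAdminClient API that appears in the stack traces further down; this is illustrative scaffolding, not the test's actual code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class DropTableAndInspectGroups {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Disable and delete the table; on the master this runs a
      // DeleteTableProcedure that archives the regions, removes the rows
      // from hbase:meta and deletes the table descriptor, as logged above.
      if (admin.tableExists(table)) {
        if (!admin.isTableDisabled(table)) {
          admin.disableTable(table);
        }
        admin.deleteTable(table);
      }
      // Query the rsgroup coprocessor endpoint the same way the test does
      // afterwards (the GetRSGroupInfo / ListRSGroupInfos requests in the log).
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      System.out.println(groups.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_975014563"));
      System.out.println(groups.listRSGroups());
    }
  }
}

On the master side, each of these client calls surfaces as the DELETE_TABLE_* procedure states and the RSGroupAdminService requests logged in this section.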
2023-07-21 18:14:34,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:34,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:34,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:34,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_975014563, current retry=0 2023-07-21 18:14:34,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:34,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_975014563 => default 2023-07-21 18:14:34,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_975014563 2023-07-21 18:14:34,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:34,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
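The cleanup recorded around this point moves the group's two region servers back to default, removes the test group, re-adds the master group, and then (in the entries that follow) tries to move the master's address into it, which the server rejects because port 45593 belongs to the master and not to a live region server. A hedged sketch of that sequence, with the host and port literals copied from this log and the try/catch an assumption about how a caller might tolerate the expected ConstraintException:

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupCleanupSketch {
  static void cleanup(RSGroupAdminClient groups) throws IOException {
    // Move the test group's region servers back to "default", then drop the
    // now-empty group, matching the MoveServers / RemoveRSGroup requests above.
    Set<Address> servers = new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 41863),
        Address.fromParts("jenkins-hbase4.apache.org", 43419)));
    groups.moveServers(servers, "default");
    groups.removeRSGroup("Group_testTableMoveTruncateAndDrop_975014563");

    // Recreate the "master" group and try to put the master's RPC address in
    // it. The master (port 45593) is not a region server, so the server-side
    // check rejects the move; the log below only warns about the exception.
    groups.addRSGroup("master");
    try {
      groups.moveServers(
          new HashSet<>(Arrays.asList(Address.fromParts("jenkins-hbase4.apache.org", 45593))),
          "master");
    } catch (ConstraintException e) {
      // Expected: "Server ... is either offline or it does not exist."
    }
  }
}

The caught message corresponds to the ConstraintException text reported by the RPC layer in the entries that follow.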
2023-07-21 18:14:34,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:34,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:34,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:34,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,343 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:34,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:34,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:34,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:34,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964474361, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:34,362 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:34,364 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:34,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,366 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:34,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:34,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,398 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=505 (was 423) Potentially hanging thread: jenkins-hbase4:41863Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-11 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x23967352-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64847@0x694ffec5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-638-acceptor-0@5cdf9472-ServerConnector@b0e8cf{HTTP/1.1, (http/1.1)}{0.0.0.0:42549} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-33705365_17 at 
/127.0.0.1:49804 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-33705365_17 at /127.0.0.1:47904 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x23967352-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: AsyncFSWAL-0-hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966-prefix:jenkins-hbase4.apache.org,44049,1689963263942.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2138724152_17 at /127.0.0.1:49854 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41863 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1038601593_17 at /127.0.0.1:47896 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64847@0x694ffec5-SendThread(127.0.0.1:64847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-33705365_17 at /127.0.0.1:50690 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41863-longCompactions-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:37139 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:37139 from 
jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2138724152_17 at /127.0.0.1:50702 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2138724152_17 at /127.0.0.1:53966 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1038601593_17 at /127.0.0.1:49838 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp812722513-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64847@0x694ffec5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-33705365_17 at /127.0.0.1:50570 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966-prefix:jenkins-hbase4.apache.org,41863,1689963267427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-33705365_17 at /127.0.0.1:53918 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp812722513-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-156cb12-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=813 (was 682) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=613 (was 605) - SystemLoadAverage LEAK? 
-, ProcessCount=174 (was 174), AvailableMemoryMB=8096 (was 8524) 2023-07-21 18:14:34,399 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-21 18:14:34,421 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=613, ProcessCount=174, AvailableMemoryMB=8094 2023-07-21 18:14:34,422 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-21 18:14:34,423 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 18:14:34,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:14:34,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:34,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:34,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:34,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,442 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:34,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:34,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 18:14:34,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:34,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:34,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964474458, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:34,459 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:14:34,461 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:34,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,462 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:34,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:34,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-21 18:14:34,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53692 deadline: 1689964474464, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 18:14:34,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-21 18:14:34,466 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53692 deadline: 1689964474465, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 18:14:34,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-21 18:14:34,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:53692 deadline: 1689964474467, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 18:14:34,468 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-21 18:14:34,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 18:14:34,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:34,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
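[Editor's note] The calls logged just above are the testValidGroupNames assertions: adding groups named foo*, foo@ and - is rejected with a ConstraintException ("RSGroup name should only contain alphanumeric characters"), while foo_123 is accepted. Below is a minimal standalone sketch of that validation rule; it is not the HBase RSGroupInfoManagerImpl.checkGroupName implementation (which throws ConstraintException), only a regex that reproduces the accept/reject behaviour visible in this log. Note that the underscore is evidently tolerated even though the message says "alphanumeric".

// Illustrative sketch only -- not the actual checkGroupName code.
// It reproduces what the log shows: foo* / foo@ / - rejected, foo_123 accepted.
import java.util.regex.Pattern;

public final class GroupNameCheck {
  // Letters, digits and underscore; foo_123 passes in the log above,
  // so '_' is assumed to be allowed despite the "alphanumeric" wording.
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // The real server-side check raises ConstraintException; a plain
      // runtime exception stands in for it in this self-contained sketch.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String candidate : new String[] {"foo*", "foo@", "-", "foo_123"}) {
      try {
        checkGroupName(candidate);
        System.out.println(candidate + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(candidate + " -> rejected: " + e.getMessage());
      }
    }
  }
}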
2023-07-21 18:14:34,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:34,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-21 18:14:34,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:34,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
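[Editor's note] The entries around this point show the per-test cleanup: empty MoveTables/MoveServers calls against the default group (which the server ignores, hence "moveTables() passed an empty set. Ignoring."), removal of the temporary foo_123 and master groups, and the group bookkeeping being rewritten to the /hbase/rsgroup znodes ("Writing ZK GroupInfo count: N"). The sketch below illustrates that cleanup pattern against a hypothetical admin interface; the method names mirror the RPCs logged here (MoveTables, MoveServers, RemoveRSGroup) but the exact HBase client API is not reproduced, so treat the interface as an assumption.

// Hypothetical sketch of the restore-to-default cleanup the log reflects.
import java.io.IOException;
import java.util.Set;

interface RSGroupAdminSketch {
  Set<String> listRSGroups() throws IOException;
  Set<String> getTablesOfGroup(String group) throws IOException;
  Set<String> getServersOfGroup(String group) throws IOException;
  void moveTables(Set<String> tables, String targetGroup) throws IOException;
  void moveServers(Set<String> servers, String targetGroup) throws IOException;
  void removeRSGroup(String group) throws IOException;
}

final class GroupCleanup {
  static void restoreDefaults(RSGroupAdminSketch admin) throws IOException {
    for (String group : admin.listRSGroups()) {
      if ("default".equals(group)) {
        continue; // the built-in default group is never removed
      }
      // Move members back to default first; an empty set is simply a no-op,
      // matching the "passed an empty set. Ignoring." lines in the log.
      admin.moveTables(admin.getTablesOfGroup(group), "default");
      admin.moveServers(admin.getServersOfGroup(group), "default");
      admin.removeRSGroup(group);
    }
  }
}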
2023-07-21 18:14:34,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:34,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:34,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:34,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,519 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:34,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:34,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:34,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:34,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964474532, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:34,533 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:34,535 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:34,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,536 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:34,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:34,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,558 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=813 (was 813), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=613 (was 613), ProcessCount=174 (was 174), AvailableMemoryMB=8091 (was 8094) 2023-07-21 18:14:34,558 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-21 18:14:34,580 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=613, ProcessCount=174, AvailableMemoryMB=8090 2023-07-21 18:14:34,580 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-21 18:14:34,580 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 18:14:34,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:34,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
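The hbase.ResourceChecker entries bracket each test method with before/after counts of threads, open file descriptors, system load, process count and free memory, and warn once the thread count crosses a threshold (here 507 against a limit of 500). A simplified sketch of the same before/after accounting as a JUnit 4 rule is given below; it is only an illustration of the idea, not HBase's ResourceChecker, and it tracks threads alone.

    import java.lang.management.ManagementFactory;
    import org.junit.rules.ExternalResource;

    /** Illustrative before/after thread accounting in the spirit of hbase.ResourceChecker. */
    public class ThreadCountCheck extends ExternalResource {
      private static final int WARN_THRESHOLD = 500; // mirrors the "superior to 500" warning above
      private int before;

      @Override
      protected void before() {
        before = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.printf("before: Thread=%d%n", before);
      }

      @Override
      protected void after() {
        int after = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.printf("after: Thread=%d (was %d)%n", after, before);
        if (after > WARN_THRESHOLD) {
          System.out.printf("WARN Thread=%d is superior to %d%n", after, WARN_THRESHOLD);
        }
      }
    }

Registered with @Rule, such a check runs around each test method, which is why matching before/after lines appear here for testValidGroupNames and then for testFailRemoveGroup.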
2023-07-21 18:14:34,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:34,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:34,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:34,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:34,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:34,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:34,607 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:34,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:34,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:34,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:34,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:34,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964474625, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:34,626 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:34,627 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:34,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,629 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:34,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:34,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:34,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:34,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
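The entries that follow show the start of testFailRemoveGroup: a group "bar" is added and three region servers are moved into it, which forces regions that do not belong to "bar" (hbase:namespace and hbase:meta here) to be reopened elsewhere via REOPEN/MOVE procedures. A hedged client-side sketch of that step is below; the group name and server addresses are taken from the log, and the placement check at the end is purely illustrative.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToBar {
      static void moveServersToBar(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("bar");
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromString("jenkins-hbase4.apache.org:44049"));
        servers.add(Address.fromString("jenkins-hbase4.apache.org:41863"));
        servers.add(Address.fromString("jenkins-hbase4.apache.org:43419"));
        rsGroupAdmin.moveServers(servers, "bar"); // triggers the REOPEN/MOVE procedures seen below
        // Regions of tables still mapped to the "default" group (hbase:meta, hbase:namespace)
        // should end up off the moved servers once those procedures finish.
        try (Admin admin = conn.getAdmin()) {
          for (ServerName sn : admin.getRegionServers()) {
            if (servers.contains(Address.fromParts(sn.getHostname(), sn.getPort()))) {
              for (RegionInfo region : admin.getRegions(sn)) {
                System.out.println(sn + " still hosts " + region.getRegionNameAsString());
              }
            }
          }
        }
      }
    }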
2023-07-21 18:14:34,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:34,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:34,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:34,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:34,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:34,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup bar 2023-07-21 18:14:34,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:34,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:34,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:34,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:34,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(238): Moving server region 2a5ec5469486ef5b01d5318bdbcbddf7, which do not belong to RSGroup bar 2023-07-21 18:14:34,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, REOPEN/MOVE 2023-07-21 18:14:34,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-21 18:14:34,668 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, REOPEN/MOVE 2023-07-21 18:14:34,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 18:14:34,669 INFO [PEWorker-5] assignment.RegionStateStore(219): 
pid=75 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:34,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-21 18:14:34,669 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963274668"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963274668"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963274668"}]},"ts":"1689963274668"} 2023-07-21 18:14:34,671 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 18:14:34,672 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44049,1689963263942, state=CLOSING 2023-07-21 18:14:34,672 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:34,675 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:34,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:34,675 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:34,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:34,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2a5ec5469486ef5b01d5318bdbcbddf7, disabling compactions & flushes 2023-07-21 18:14:34,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 18:14:34,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:34,833 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
after waiting 0 ms 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:34,833 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2a5ec5469486ef5b01d5318bdbcbddf7 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 18:14:34,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:34,833 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.96 KB heapSize=58.25 KB 2023-07-21 18:14:34,880 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=35.08 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:34,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/.tmp/info/3771d3be9d3a40e8b355391ec4c75585 2023-07-21 18:14:34,892 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:34,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/.tmp/info/3771d3be9d3a40e8b355391ec4c75585 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info/3771d3be9d3a40e8b355391ec4c75585 2023-07-21 18:14:34,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info/3771d3be9d3a40e8b355391ec4c75585, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 18:14:34,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 2a5ec5469486ef5b01d5318bdbcbddf7 in 85ms, sequenceid=6, compaction requested=false 2023-07-21 18:14:34,929 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/rep_barrier/a052276ed896484086995a941b41dbce 2023-07-21 18:14:34,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 18:14:34,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:34,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2a5ec5469486ef5b01d5318bdbcbddf7: 2023-07-21 18:14:34,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2a5ec5469486ef5b01d5318bdbcbddf7 move to jenkins-hbase4.apache.org,46437,1689963263715 record at close sequenceid=6 2023-07-21 18:14:34,939 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a052276ed896484086995a941b41dbce 2023-07-21 18:14:34,941 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:34,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:34,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:34,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:34,973 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/ae9608c24608433db1ffe0f9651c860e as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:34,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:34,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/ae9608c24608433db1ffe0f9651c860e, entries=23, sequenceid=97, filesize=7.5 K 2023-07-21 18:14:34,981 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/rep_barrier/a052276ed896484086995a941b41dbce as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier/a052276ed896484086995a941b41dbce 2023-07-21 18:14:34,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a052276ed896484086995a941b41dbce 2023-07-21 
18:14:34,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier/a052276ed896484086995a941b41dbce, entries=10, sequenceid=97, filesize=6.1 K 2023-07-21 18:14:34,989 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/4f17ff6f586a4ce3b036de146ab07f10 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:34,996 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:34,996 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/4f17ff6f586a4ce3b036de146ab07f10, entries=11, sequenceid=97, filesize=6.0 K 2023-07-21 18:14:34,997 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.96 KB/38875, heapSize ~58.20 KB/59600, currentSize=0 B/0 for 1588230740 in 164ms, sequenceid=97, compaction requested=false 2023-07-21 18:14:35,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/recovered.edits/100.seqid, newMaxSeqId=100, maxSeqId=17 2023-07-21 18:14:35,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:35,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:35,014 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:35,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46437,1689963263715 record at close sequenceid=97 2023-07-21 18:14:35,021 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 18:14:35,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 18:14:35,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-21 18:14:35,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44049,1689963263942 in 346 msec 2023-07-21 18:14:35,025 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:35,176 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta 
replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46437,1689963263715, state=OPENING 2023-07-21 18:14:35,178 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:35,178 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:35,182 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:35,358 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 18:14:35,358 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:35,361 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46437%2C1689963263715.meta, suffix=.meta, logDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,46437,1689963263715, archiveDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs, maxLogs=32 2023-07-21 18:14:35,383 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK] 2023-07-21 18:14:35,392 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK] 2023-07-21 18:14:35,397 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK] 2023-07-21 18:14:35,408 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,46437,1689963263715/jenkins-hbase4.apache.org%2C46437%2C1689963263715.meta.1689963275362.meta 2023-07-21 18:14:35,410 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43391,DS-88c8b8eb-df08-4ca1-8ed0-e6d154d2ee62,DISK], DatanodeInfoWithStorage[127.0.0.1:33205,DS-86b9fc6a-e29a-4a47-b59b-dae87071f69c,DISK], DatanodeInfoWithStorage[127.0.0.1:35467,DS-c8b5d9c7-8b97-4bf3-be00-3803219852c7,DISK]] 2023-07-21 18:14:35,411 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:35,411 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 
with path null and priority 536870911 2023-07-21 18:14:35,411 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 18:14:35,411 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 18:14:35,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 18:14:35,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:35,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 18:14:35,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 18:14:35,414 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:35,415 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:35,415 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info 2023-07-21 18:14:35,416 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:35,425 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/a3a400ee97674892b06c033e8612e30c 2023-07-21 18:14:35,433 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:35,433 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/ae9608c24608433db1ffe0f9651c860e 2023-07-21 18:14:35,433 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:35,433 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:35,435 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:35,435 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:35,435 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:35,444 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a052276ed896484086995a941b41dbce 2023-07-21 18:14:35,445 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier/a052276ed896484086995a941b41dbce 2023-07-21 18:14:35,445 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:35,445 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:35,447 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:35,447 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table 2023-07-21 18:14:35,448 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window 
min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:35,455 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:35,456 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/4f17ff6f586a4ce3b036de146ab07f10 2023-07-21 18:14:35,463 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/c64176eae9d7460fb0e81673ba29f3c1 2023-07-21 18:14:35,463 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:35,464 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:35,465 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740 2023-07-21 18:14:35,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
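[Editor's note] The FlushLargeStoresPolicy line above shows hbase:meta falling back to memstore-flush-size divided by the number of families because no per-column-family lower bound is set in its table descriptor. For a user table that bound can be supplied through the descriptor; the following is a minimal sketch using the 2.x Admin API, with a hypothetical table name and an illustrative 16 MB value that are not taken from this run.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    class SetPerFamilyFlushLowerBound {
      // Sets the per-table override that FlushLargeStoresPolicy looks for; the table name
      // and the 16 MB value are illustrative, not taken from this run.
      static void setLowerBound(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("some_table");  // hypothetical table
        TableDescriptor updated = TableDescriptorBuilder.newBuilder(admin.getDescriptor(tn))
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound", "16777216")
            .build();
        admin.modifyTable(updated);  // apply the updated descriptor
      }
    }
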
2023-07-21 18:14:35,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:35,474 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=101; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11295585760, jitterRate=0.05198340117931366}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:35,474 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:35,479 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689963275341 2023-07-21 18:14:35,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 18:14:35,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 18:14:35,482 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46437,1689963263715, state=OPEN 2023-07-21 18:14:35,483 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:35,483 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:35,485 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=CLOSED 2023-07-21 18:14:35,485 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963275485"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963275485"}]},"ts":"1689963275485"} 2023-07-21 18:14:35,486 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44049] ipc.CallRunner(144): callId: 180 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:42246 deadline: 1689963335486, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46437 startCode=1689963263715. As of locationSeqNum=97. 
2023-07-21 18:14:35,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-21 18:14:35,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46437,1689963263715 in 305 msec 2023-07-21 18:14:35,488 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 819 msec 2023-07-21 18:14:35,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-21 18:14:35,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,44049,1689963263942 in 917 msec 2023-07-21 18:14:35,593 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:35,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-21 18:14:35,743 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:35,744 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963275743"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963275743"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963275743"}]},"ts":"1689963275743"} 2023-07-21 18:14:35,748 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:35,925 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
2023-07-21 18:14:35,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2a5ec5469486ef5b01d5318bdbcbddf7, NAME => 'hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:35,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:35,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,935 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,937 DEBUG [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info 2023-07-21 18:14:35,937 DEBUG [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info 2023-07-21 18:14:35,937 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2a5ec5469486ef5b01d5318bdbcbddf7 columnFamilyName info 2023-07-21 18:14:35,951 DEBUG [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] regionserver.HStore(539): loaded hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/info/3771d3be9d3a40e8b355391ec4c75585 2023-07-21 18:14:35,951 INFO [StoreOpener-2a5ec5469486ef5b01d5318bdbcbddf7-1] regionserver.HStore(310): Store=2a5ec5469486ef5b01d5318bdbcbddf7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:35,952 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,957 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:35,958 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2a5ec5469486ef5b01d5318bdbcbddf7; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10954587520, jitterRate=0.020225465297698975}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:35,959 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2a5ec5469486ef5b01d5318bdbcbddf7: 2023-07-21 18:14:35,959 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7., pid=80, masterSystemTime=1689963275901 2023-07-21 18:14:35,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:35,961 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
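[Editor's note] The meta and namespace regions above end up reopened on jenkins-hbase4.apache.org,46437 because the other three regionservers are being moved out of the default rsgroup; the "Move servers done: default => bar" line below records that RSGroupAdminService.MoveServers call completing. A rough client-side sketch of the same operation, using the RSGroupAdminClient that the rsgroup module's own tests use (the Connection would come from the mini-cluster; this is not the test's literal code):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class MoveServersToBar {
      // conn would come from the test harness, e.g. the testing utility's connection.
      static void moveServersToBar(Connection conn) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("bar");
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 41863),
            Address.fromParts("jenkins-hbase4.apache.org", 43419),
            Address.fromParts("jenkins-hbase4.apache.org", 44049)));
        // Regions hosted on these servers (including hbase:meta and hbase:namespace here)
        // are reassigned to the server that stays in the default group before the move completes.
        rsGroupAdmin.moveServers(servers, "bar");
      }
    }
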
2023-07-21 18:14:35,962 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=2a5ec5469486ef5b01d5318bdbcbddf7, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:35,962 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963275962"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963275962"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963275962"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963275962"}]},"ts":"1689963275962"} 2023-07-21 18:14:35,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-21 18:14:35,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 2a5ec5469486ef5b01d5318bdbcbddf7, server=jenkins-hbase4.apache.org,46437,1689963263715 in 215 msec 2023-07-21 18:14:35,967 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2a5ec5469486ef5b01d5318bdbcbddf7, REOPEN/MOVE in 1.3010 sec 2023-07-21 18:14:36,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425, jenkins-hbase4.apache.org,44049,1689963263942] are moved back to default 2023-07-21 18:14:36,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 18:14:36,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:36,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:36,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:36,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 18:14:36,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:36,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:36,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:36,682 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:36,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-21 18:14:36,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 18:14:36,685 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:36,686 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:36,686 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:36,687 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:36,693 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:36,695 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:36,696 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba empty. 
2023-07-21 18:14:36,696 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:36,696 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 18:14:36,719 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:36,721 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7df9ad7e2517b7ed634e56df62a8e4ba, NAME => 'Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 7df9ad7e2517b7ed634e56df62a8e4ba, disabling compactions & flushes 2023-07-21 18:14:36,741 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. after waiting 0 ms 2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:36,741 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
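[Editor's note] The HMaster create request above is logged as a shell-style spec: table Group_testFailRemoveGroup with REGION_REPLICATION => '1' and a single family 'f' left at defaults. An equivalent Java Admin call, shown only as an illustrative sketch, looks like this:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    class CreateGroupTestTable {
      // Same single-family layout as the spec in the create request above; all other attributes default.
      static void createTable(Admin admin) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
            .setRegionReplication(1)                                 // REGION_REPLICATION => '1'
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))  // NAME => 'f'
            .build();
        // The master assigns a procId (pid=81 in this run) and drives CreateTableProcedure.
        admin.createTable(desc);
      }
    }
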
2023-07-21 18:14:36,741 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:36,744 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:36,746 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963276746"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963276746"}]},"ts":"1689963276746"} 2023-07-21 18:14:36,748 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:36,749 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:36,749 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963276749"}]},"ts":"1689963276749"} 2023-07-21 18:14:36,750 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 18:14:36,756 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, ASSIGN}] 2023-07-21 18:14:36,758 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, ASSIGN 2023-07-21 18:14:36,759 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:36,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 18:14:36,911 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:36,911 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963276910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963276910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963276910"}]},"ts":"1689963276910"} 2023-07-21 18:14:36,913 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 
18:14:36,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 18:14:37,029 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 18:14:37,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7df9ad7e2517b7ed634e56df62a8e4ba, NAME => 'Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:37,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:37,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,083 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,089 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:37,089 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:37,090 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7df9ad7e2517b7ed634e56df62a8e4ba columnFamilyName f 2023-07-21 18:14:37,090 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(310): 
Store=7df9ad7e2517b7ed634e56df62a8e4ba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:37,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:37,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7df9ad7e2517b7ed634e56df62a8e4ba; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9791123360, jitterRate=-0.08813057839870453}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:37,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:37,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba., pid=83, masterSystemTime=1689963277065 2023-07-21 18:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
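[Editor's note] Once the region is open on jenkins-hbase4.apache.org,46437, the listener thread below waits for the table's regions to be assigned before continuing. In test code that wait is typically expressed through HBaseTestingUtility; a minimal sketch, assuming the usual test-utility instance, is:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    class WaitForAssignment {
      // Blocks until every region of the table shows up as assigned, which is what produces the
      // "Waiting until all regions ... get assigned" and "All regions ... assigned" lines below.
      static void waitForGroupTestTable(HBaseTestingUtility testUtil) throws IOException {
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"));
      }
    }
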
2023-07-21 18:14:37,106 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:37,106 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963277106"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963277106"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963277106"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963277106"}]},"ts":"1689963277106"} 2023-07-21 18:14:37,110 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-21 18:14:37,110 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715 in 195 msec 2023-07-21 18:14:37,112 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 18:14:37,112 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, ASSIGN in 354 msec 2023-07-21 18:14:37,113 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:37,113 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963277113"}]},"ts":"1689963277113"} 2023-07-21 18:14:37,115 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 18:14:37,118 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:37,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 440 msec 2023-07-21 18:14:37,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 18:14:37,288 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-21 18:14:37,289 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-21 18:14:37,289 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:37,290 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44049] ipc.CallRunner(144): callId: 278 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:39880 deadline: 1689963337290, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46437 startCode=1689963263715. As of locationSeqNum=97. 2023-07-21 18:14:37,391 DEBUG [hconnection-0x576859ba-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:37,393 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:37,404 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-21 18:14:37,404 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:37,404 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 18:14:37,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 18:14:37,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:37,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:37,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:37,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:37,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 18:14:37,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 7df9ad7e2517b7ed634e56df62a8e4ba to RSGroup bar 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 18:14:37,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:37,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE 2023-07-21 18:14:37,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 18:14:37,416 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE 2023-07-21 18:14:37,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:37,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963277417"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963277417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963277417"}]},"ts":"1689963277417"} 2023-07-21 18:14:37,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:37,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7df9ad7e2517b7ed634e56df62a8e4ba, disabling compactions & flushes 2023-07-21 18:14:37,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. after waiting 0 ms 2023-07-21 18:14:37,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:37,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
2023-07-21 18:14:37,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:37,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7df9ad7e2517b7ed634e56df62a8e4ba move to jenkins-hbase4.apache.org,44049,1689963263942 record at close sequenceid=2 2023-07-21 18:14:37,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,587 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSED 2023-07-21 18:14:37,587 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963277587"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963277587"}]},"ts":"1689963277587"} 2023-07-21 18:14:37,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-21 18:14:37,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715 in 167 msec 2023-07-21 18:14:37,591 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:37,741 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:37,741 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:37,741 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963277741"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963277741"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963277741"}]},"ts":"1689963277741"} 2023-07-21 18:14:37,743 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:37,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
2023-07-21 18:14:37,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7df9ad7e2517b7ed634e56df62a8e4ba, NAME => 'Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:37,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:37,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,904 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,905 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:37,905 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:37,906 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7df9ad7e2517b7ed634e56df62a8e4ba columnFamilyName f 2023-07-21 18:14:37,907 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(310): Store=7df9ad7e2517b7ed634e56df62a8e4ba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:37,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,909 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:37,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7df9ad7e2517b7ed634e56df62a8e4ba; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9770871040, jitterRate=-0.09001672267913818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:37,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:37,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba., pid=86, masterSystemTime=1689963277895 2023-07-21 18:14:37,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:37,917 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:37,918 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963277917"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963277917"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963277917"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963277917"}]},"ts":"1689963277917"} 2023-07-21 18:14:37,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-21 18:14:37,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,44049,1689963263942 in 176 msec 2023-07-21 18:14:37,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE in 507 msec 2023-07-21 18:14:37,977 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-21 18:14:38,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-21 18:14:38,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
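[Editor's note] With Group_testFailRemoveGroup now hosted by group bar, the requests logged below exercise the constraints this test is named for: removing a group that still owns a table is rejected, and moving all of bar's servers back to default while the table remains is rejected as well, both with ConstraintException on the server side. A hedged sketch of the corresponding client calls and expected failures, reusing the rsgroup client from the earlier sketch (the failures are caught as IOException here, since ConstraintException extends it):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    class FailRemoveGroupSketch {
      static void exerciseConstraints(RSGroupAdmin rsGroupAdmin) throws IOException {
        // The table was already moved to bar at this point (pid=84 REOPEN/MOVE finished above).
        Set<Address> barServers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 44049),
            Address.fromParts("jenkins-hbase4.apache.org", 41863),
            Address.fromParts("jenkins-hbase4.apache.org", 43419)));
        try {
          rsGroupAdmin.removeRSGroup("bar");  // rejected while the group still has 1 table
        } catch (IOException expected) {
          // server raises ConstraintException: "RSGroup bar has 1 tables; you must remove these tables ..."
        }
        try {
          rsGroupAdmin.moveServers(barServers, "default");  // rejected: bar's table would have no servers
        } catch (IOException expected) {
          // "Cannot leave a RSGroup bar that contains tables without servers to host them."
        }
        // The table has to be moved back to default before bar can be removed, as the log shows next.
        rsGroupAdmin.moveTables(new HashSet<>(Arrays.asList(
            TableName.valueOf("Group_testFailRemoveGroup"))), "default");
      }
    }
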
2023-07-21 18:14:38,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:38,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:38,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:38,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 18:14:38,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:38,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 18:14:38,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:38,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53692 deadline: 1689964478424, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 18:14:38,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:38,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:38,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 290 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:53692 deadline: 1689964478425, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 18:14:38,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 18:14:38,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:38,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:38,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:38,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:38,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 18:14:38,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 7df9ad7e2517b7ed634e56df62a8e4ba to RSGroup default 2023-07-21 18:14:38,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE 2023-07-21 18:14:38,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 18:14:38,436 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE 2023-07-21 18:14:38,437 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:38,437 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963278437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963278437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963278437"}]},"ts":"1689963278437"} 2023-07-21 18:14:38,441 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:38,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7df9ad7e2517b7ed634e56df62a8e4ba, disabling compactions & flushes 2023-07-21 18:14:38,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:38,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:38,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. after waiting 0 ms 2023-07-21 18:14:38,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:38,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:38,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
2023-07-21 18:14:38,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:38,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7df9ad7e2517b7ed634e56df62a8e4ba move to jenkins-hbase4.apache.org,46437,1689963263715 record at close sequenceid=5 2023-07-21 18:14:38,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,605 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSED 2023-07-21 18:14:38,605 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963278605"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963278605"}]},"ts":"1689963278605"} 2023-07-21 18:14:38,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 18:14:38,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,44049,1689963263942 in 169 msec 2023-07-21 18:14:38,611 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:38,761 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:38,762 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963278761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963278761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963278761"}]},"ts":"1689963278761"} 2023-07-21 18:14:38,764 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:38,921 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
2023-07-21 18:14:38,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7df9ad7e2517b7ed634e56df62a8e4ba, NAME => 'Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:38,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:38,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,924 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,927 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:38,928 DEBUG [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f 2023-07-21 18:14:38,928 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7df9ad7e2517b7ed634e56df62a8e4ba columnFamilyName f 2023-07-21 18:14:38,929 INFO [StoreOpener-7df9ad7e2517b7ed634e56df62a8e4ba-1] regionserver.HStore(310): Store=7df9ad7e2517b7ed634e56df62a8e4ba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:38,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,932 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:38,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7df9ad7e2517b7ed634e56df62a8e4ba; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11868422560, jitterRate=0.10533298552036285}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:38,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:38,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba., pid=89, masterSystemTime=1689963278916 2023-07-21 18:14:38,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:38,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:38,940 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:38,940 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963278940"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963278940"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963278940"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963278940"}]},"ts":"1689963278940"} 2023-07-21 18:14:38,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-21 18:14:38,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715 in 178 msec 2023-07-21 18:14:38,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, REOPEN/MOVE in 511 msec 2023-07-21 18:14:39,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-21 18:14:39,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
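The two ConstraintException rejections a little earlier (callId 288 and 290) are the behaviour under test: while "bar" still owns the table, the master refuses removeRSGroup, and it refuses a moveServers call that would leave the table with nowhere to run. Only after the table is moved back to "default" (the REOPEN/MOVE that just completed above) can cleanup continue. A hedged sketch of those three calls under the same API assumption; the server addresses are the ones in the log:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class FailRemoveGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      try {
        groups.removeRSGroup("bar");
      } catch (ConstraintException e) {
        // Rejected: "RSGroup bar has 1 tables; you must remove these tables ..."
      }
      try {
        groups.moveServers(new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 44049),
            Address.fromParts("jenkins-hbase4.apache.org", 41863),
            Address.fromParts("jenkins-hbase4.apache.org", 43419))), "default");
      } catch (ConstraintException e) {
        // Rejected: "Cannot leave a RSGroup bar that contains tables without servers ..."
      }
      // Moving the table out of "bar" is allowed, and is what produced pid=87/88/89 above.
      groups.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
    }
  }
}
```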
2023-07-21 18:14:39,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:39,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:39,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:39,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 18:14:39,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:39,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 297 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53692 deadline: 1689964479443, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-21 18:14:39,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:39,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:39,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 18:14:39,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:39,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:39,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 18:14:39,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425, jenkins-hbase4.apache.org,44049,1689963263942] are moved back to bar 2023-07-21 18:14:39,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 18:14:39,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:39,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:39,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:39,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 18:14:39,463 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44049] ipc.CallRunner(144): callId: 208 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:42246 deadline: 1689963339463, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46437 startCode=1689963263715. As of locationSeqNum=6. 
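Even with the table gone, removeRSGroup is still rejected while "bar" has member servers (callId 297 above); the RegionMovedException at the end of the last entry is only a client scan being told the region's new location, which the HBase client handles by refreshing its cache and retrying. The teardown therefore moves the three servers back to "default" and only then removes the now-empty group, which is what the znode updates and the GroupInfo count dropping to 5 in the next entries record. A sketch of that final sequence, same assumptions as before:

```java
import java.util.Arrays;
import java.util.HashSet;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class DrainAndRemoveGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // With no tables left in "bar", its servers may be moved back to "default" ...
      groups.moveServers(new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 44049),
          Address.fromParts("jenkins-hbase4.apache.org", 41863),
          Address.fromParts("jenkins-hbase4.apache.org", 43419))), "default");
      // ... and only then does removing the empty group succeed.
      groups.removeRSGroup("bar");
    }
  }
}
```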
2023-07-21 18:14:39,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:39,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:39,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:39,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:39,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:39,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:39,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:39,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:39,802 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 18:14:39,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-21 18:14:39,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:39,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 18:14:39,813 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963279813"}]},"ts":"1689963279813"} 2023-07-21 18:14:39,815 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 18:14:39,817 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 18:14:39,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, UNASSIGN}] 2023-07-21 18:14:39,820 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, UNASSIGN 2023-07-21 18:14:39,821 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:39,822 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963279821"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963279821"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963279821"}]},"ts":"1689963279821"} 2023-07-21 18:14:39,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:39,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 18:14:39,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:39,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7df9ad7e2517b7ed634e56df62a8e4ba, disabling compactions & flushes 2023-07-21 18:14:39,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:39,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:39,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. after waiting 0 ms 2023-07-21 18:14:39,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 2023-07-21 18:14:39,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 18:14:39,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba. 
2023-07-21 18:14:39,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7df9ad7e2517b7ed634e56df62a8e4ba: 2023-07-21 18:14:39,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:39,987 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=7df9ad7e2517b7ed634e56df62a8e4ba, regionState=CLOSED 2023-07-21 18:14:39,987 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689963279986"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963279986"}]},"ts":"1689963279986"} 2023-07-21 18:14:39,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-21 18:14:39,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 7df9ad7e2517b7ed634e56df62a8e4ba, server=jenkins-hbase4.apache.org,46437,1689963263715 in 165 msec 2023-07-21 18:14:39,998 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-21 18:14:39,998 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=7df9ad7e2517b7ed634e56df62a8e4ba, UNASSIGN in 177 msec 2023-07-21 18:14:40,000 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963280000"}]},"ts":"1689963280000"} 2023-07-21 18:14:40,001 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 18:14:40,003 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 18:14:40,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 198 msec 2023-07-21 18:14:40,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 18:14:40,115 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-21 18:14:40,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-21 18:14:40,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,119 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 18:14:40,119 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:40,124 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:40,126 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits] 2023-07-21 18:14:40,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 18:14:40,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/10.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba/recovered.edits/10.seqid 2023-07-21 18:14:40,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testFailRemoveGroup/7df9ad7e2517b7ed634e56df62a8e4ba 2023-07-21 18:14:40,133 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 18:14:40,135 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,137 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 18:14:40,139 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 18:14:40,140 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,140 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-21 18:14:40,140 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963280140"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:40,142 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 18:14:40,142 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7df9ad7e2517b7ed634e56df62a8e4ba, NAME => 'Group_testFailRemoveGroup,,1689963276678.7df9ad7e2517b7ed634e56df62a8e4ba.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 18:14:40,142 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-21 18:14:40,142 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963280142"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:40,144 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 18:14:40,147 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 18:14:40,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 31 msec 2023-07-21 18:14:40,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 18:14:40,229 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-21 18:14:40,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:40,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
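The DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) above are driven from the client through the ordinary Admin API; HBaseAdmin appears as the caller in the "Started disable" and "Operation: DISABLE/DELETE ... completed" entries. A minimal sketch of that client side, with the connection setup assumed rather than taken from the test:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTestTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // DisableTableProcedure: unassign the table's region and mark it DISABLED in hbase:meta.
      admin.disableTable(table);
      // DeleteTableProcedure: archive the region directory (the HFileArchiver entries above)
      // and remove the table's rows from hbase:meta.
      admin.deleteTable(table);
    }
  }
}
```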
2023-07-21 18:14:40,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:40,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:40,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:40,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:40,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:40,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:40,249 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:40,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:40,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:40,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:40,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:40,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:40,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 345 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964480261, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:40,262 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:40,263 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:40,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,264 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:40,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:40,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:40,285 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=521 (was 507) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966-prefix:jenkins-hbase4.apache.org,46437,1689963263715.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147856072_17 at /127.0.0.1:58152 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_147856072_17 at /127.0.0.1:49502 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147856072_17 at /127.0.0.1:58112 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x576859ba-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1222840337_17 at /127.0.0.1:49512 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147856072_17 at /127.0.0.1:47960 [Receiving block BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1222840337_17 at /127.0.0.1:47984 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1274896498-172.31.14.131-1689963257910:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 813) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=636 (was 613) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 174), AvailableMemoryMB=7624 (was 8090) 2023-07-21 18:14:40,289 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-21 18:14:40,307 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=521, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=636, ProcessCount=174, AvailableMemoryMB=7624 2023-07-21 18:14:40,307 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-21 18:14:40,307 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 18:14:40,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:40,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 18:14:40,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:40,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:40,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:40,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:40,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:40,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:40,327 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:40,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:40,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:40,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:40,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:40,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:40,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 373 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964480339, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:40,340 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:40,345 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:40,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,346 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:40,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:40,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:40,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:40,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:40,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,353 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:40,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:40,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863] to rsgroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:40,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:40,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427] are moved back to default 2023-07-21 18:14:40,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:40,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:40,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:40,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:40,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:40,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:40,378 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:40,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-21 18:14:40,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 18:14:40,383 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,383 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,384 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:40,384 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:40,391 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:40,393 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,394 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 empty. 
2023-07-21 18:14:40,395 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,395 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 18:14:40,430 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:40,432 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 89588fea88f911077d66bb065dfc3cf5, NAME => 'GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:40,460 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:40,461 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 89588fea88f911077d66bb065dfc3cf5, disabling compactions & flushes 2023-07-21 18:14:40,461 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:40,461 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:40,461 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. after waiting 0 ms 2023-07-21 18:14:40,461 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:40,461 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 
2023-07-21 18:14:40,461 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 89588fea88f911077d66bb065dfc3cf5: 2023-07-21 18:14:40,464 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:40,465 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963280464"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963280464"}]},"ts":"1689963280464"} 2023-07-21 18:14:40,466 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:40,467 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:40,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963280467"}]},"ts":"1689963280467"} 2023-07-21 18:14:40,468 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 18:14:40,472 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:40,473 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:40,473 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:40,473 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:40,473 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:40,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, ASSIGN}] 2023-07-21 18:14:40,475 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, ASSIGN 2023-07-21 18:14:40,477 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43419,1689963263425; forceNewPlan=false, retain=false 2023-07-21 18:14:40,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 18:14:40,627 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 18:14:40,629 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:40,629 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963280628"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963280628"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963280628"}]},"ts":"1689963280628"} 2023-07-21 18:14:40,631 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:40,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 18:14:40,789 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:40,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 89588fea88f911077d66bb065dfc3cf5, NAME => 'GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:40,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:40,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,791 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,792 DEBUG [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/f 2023-07-21 18:14:40,792 DEBUG [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/f 2023-07-21 18:14:40,792 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 89588fea88f911077d66bb065dfc3cf5 columnFamilyName f 2023-07-21 18:14:40,793 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] regionserver.HStore(310): Store=89588fea88f911077d66bb065dfc3cf5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:40,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:40,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:40,800 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 89588fea88f911077d66bb065dfc3cf5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10093988960, jitterRate=-0.05992402136325836}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:40,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 89588fea88f911077d66bb065dfc3cf5: 2023-07-21 18:14:40,801 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5., pid=96, masterSystemTime=1689963280785 2023-07-21 18:14:40,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:40,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 
2023-07-21 18:14:40,807 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:40,807 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963280807"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963280807"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963280807"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963280807"}]},"ts":"1689963280807"} 2023-07-21 18:14:40,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-21 18:14:40,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,43419,1689963263425 in 177 msec 2023-07-21 18:14:40,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 18:14:40,815 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, ASSIGN in 337 msec 2023-07-21 18:14:40,815 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:40,816 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963280816"}]},"ts":"1689963280816"} 2023-07-21 18:14:40,817 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 18:14:40,820 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:40,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 445 msec 2023-07-21 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 18:14:40,983 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-21 18:14:40,983 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-21 18:14:40,983 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:40,988 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-21 18:14:40,988 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:40,988 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 18:14:40,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:40,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:40,995 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:40,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-21 18:14:40,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 18:14:40,998 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:40,998 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:40,999 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:41,000 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:41,002 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:41,004 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,005 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 empty. 
2023-07-21 18:14:41,005 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,006 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 18:14:41,028 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:41,030 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4b8dbda31aa7e5a0d3d8e4a1831df911, NAME => 'GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 4b8dbda31aa7e5a0d3d8e4a1831df911, disabling compactions & flushes 2023-07-21 18:14:41,052 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. after waiting 0 ms 2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,052 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 
2023-07-21 18:14:41,052 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 4b8dbda31aa7e5a0d3d8e4a1831df911: 2023-07-21 18:14:41,056 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:41,057 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281057"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963281057"}]},"ts":"1689963281057"} 2023-07-21 18:14:41,059 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:41,060 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:41,060 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963281060"}]},"ts":"1689963281060"} 2023-07-21 18:14:41,068 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 18:14:41,072 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:41,072 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:41,072 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:41,072 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:41,072 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:41,073 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, ASSIGN}] 2023-07-21 18:14:41,075 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, ASSIGN 2023-07-21 18:14:41,077 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:41,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 18:14:41,227 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 18:14:41,228 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:41,229 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281228"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963281228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963281228"}]},"ts":"1689963281228"} 2023-07-21 18:14:41,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:41,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 18:14:41,387 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b8dbda31aa7e5a0d3d8e4a1831df911, NAME => 'GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:41,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:41,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,390 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,397 DEBUG [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/f 2023-07-21 18:14:41,397 DEBUG [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/f 2023-07-21 18:14:41,397 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b8dbda31aa7e5a0d3d8e4a1831df911 columnFamilyName f 2023-07-21 18:14:41,398 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] regionserver.HStore(310): Store=4b8dbda31aa7e5a0d3d8e4a1831df911/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:41,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:41,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b8dbda31aa7e5a0d3d8e4a1831df911; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9624094560, jitterRate=-0.10368634760379791}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:41,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b8dbda31aa7e5a0d3d8e4a1831df911: 2023-07-21 18:14:41,415 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911., pid=99, masterSystemTime=1689963281384 2023-07-21 18:14:41,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 
2023-07-21 18:14:41,418 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:41,418 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281418"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963281418"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963281418"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963281418"}]},"ts":"1689963281418"} 2023-07-21 18:14:41,422 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 18:14:41,422 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,46437,1689963263715 in 189 msec 2023-07-21 18:14:41,424 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 18:14:41,424 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, ASSIGN in 349 msec 2023-07-21 18:14:41,425 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:41,425 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963281425"}]},"ts":"1689963281425"} 2023-07-21 18:14:41,427 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 18:14:41,434 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:41,435 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 443 msec 2023-07-21 18:14:41,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 18:14:41,602 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-21 18:14:41,602 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-21 18:14:41,603 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:41,613 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
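The create RPC for GrouptestMultiTableMoveB logs the full table descriptor: REGION_REPLICATION => '1' and one family 'f' (BLOOMFILTER NONE, VERSIONS 1, BLOCKSIZE 65536, no compression or encoding). A hedged sketch of building an equivalent descriptor with the HBase 2.x client builders; setting BloomType.NONE explicitly is an assumption made to match the printed attributes, and the remaining attributes are left at their defaults:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      // Builds a descriptor equivalent to the one printed in the create RPC above.
      static TableDescriptor multiTableMoveB() {
        ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("f"))
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .setMaxVersions(1)                  // VERSIONS => '1'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("GrouptestMultiTableMoveB"))
            .setColumnFamily(family)            // single family 'f'
            .build();                           // REGION_REPLICATION stays at its default of 1
      }

      // Issues a create like pid=97 when passed a connected Admin.
      static void create(Admin admin) throws IOException {
        admin.createTable(multiTableMoveB());
      }
    }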
2023-07-21 18:14:41,614 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:41,614 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 18:14:41,615 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:41,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 18:14:41,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:41,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 18:14:41,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:41,641 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:41,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:41,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:41,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 4b8dbda31aa7e5a0d3d8e4a1831df911 to RSGroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, REOPEN/MOVE 2023-07-21 18:14:41,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,651 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 89588fea88f911077d66bb065dfc3cf5 to RSGroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:41,651 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, REOPEN/MOVE 2023-07-21 18:14:41,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, REOPEN/MOVE 2023-07-21 18:14:41,652 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:41,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_2090606750, current retry=0 2023-07-21 18:14:41,653 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281652"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963281652"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963281652"}]},"ts":"1689963281652"} 2023-07-21 18:14:41,654 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, REOPEN/MOVE 2023-07-21 18:14:41,655 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:41,655 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281655"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963281655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963281655"}]},"ts":"1689963281655"} 2023-07-21 18:14:41,655 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:41,658 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:41,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b8dbda31aa7e5a0d3d8e4a1831df911, disabling compactions & flushes 2023-07-21 18:14:41,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. after waiting 0 ms 2023-07-21 18:14:41,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:41,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 89588fea88f911077d66bb065dfc3cf5, disabling compactions & flushes 2023-07-21 18:14:41,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:41,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:41,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. after waiting 0 ms 2023-07-21 18:14:41,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:41,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:41,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:41,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b8dbda31aa7e5a0d3d8e4a1831df911: 2023-07-21 18:14:41,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4b8dbda31aa7e5a0d3d8e4a1831df911 move to jenkins-hbase4.apache.org,41863,1689963267427 record at close sequenceid=2 2023-07-21 18:14:41,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:41,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 
2023-07-21 18:14:41,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 89588fea88f911077d66bb065dfc3cf5: 2023-07-21 18:14:41,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 89588fea88f911077d66bb065dfc3cf5 move to jenkins-hbase4.apache.org,41863,1689963267427 record at close sequenceid=2 2023-07-21 18:14:41,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:41,817 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=CLOSED 2023-07-21 18:14:41,819 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963281817"}]},"ts":"1689963281817"} 2023-07-21 18:14:41,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:41,819 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=CLOSED 2023-07-21 18:14:41,819 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963281819"}]},"ts":"1689963281819"} 2023-07-21 18:14:41,822 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-21 18:14:41,823 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,46437,1689963263715 in 165 msec 2023-07-21 18:14:41,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-21 18:14:41,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,43419,1689963263425 in 163 msec 2023-07-21 18:14:41,823 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:41,824 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41863,1689963267427; forceNewPlan=false, retain=false 2023-07-21 18:14:41,974 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 
18:14:41,974 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:41,974 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281974"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963281974"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963281974"}]},"ts":"1689963281974"} 2023-07-21 18:14:41,974 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963281974"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963281974"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963281974"}]},"ts":"1689963281974"} 2023-07-21 18:14:41,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:41,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:42,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 89588fea88f911077d66bb065dfc3cf5, NAME => 'GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,136 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,137 DEBUG [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/f 2023-07-21 18:14:42,137 DEBUG [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/f 2023-07-21 18:14:42,138 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 89588fea88f911077d66bb065dfc3cf5 columnFamilyName f 2023-07-21 18:14:42,138 INFO [StoreOpener-89588fea88f911077d66bb065dfc3cf5-1] regionserver.HStore(310): Store=89588fea88f911077d66bb065dfc3cf5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:42,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 89588fea88f911077d66bb065dfc3cf5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9978782400, jitterRate=-0.07065346837043762}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:42,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 89588fea88f911077d66bb065dfc3cf5: 2023-07-21 18:14:42,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5., pid=104, masterSystemTime=1689963282128 2023-07-21 18:14:42,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:42,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 
2023-07-21 18:14:42,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:42,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b8dbda31aa7e5a0d3d8e4a1831df911, NAME => 'GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:42,153 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:42,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963282153"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963282153"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963282153"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963282153"}]},"ts":"1689963282153"} 2023-07-21 18:14:42,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:42,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,156 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,158 DEBUG [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/f 2023-07-21 18:14:42,158 DEBUG [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/f 2023-07-21 18:14:42,159 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b8dbda31aa7e5a0d3d8e4a1831df911 columnFamilyName f 2023-07-21 18:14:42,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-21 18:14:42,161 INFO [StoreOpener-4b8dbda31aa7e5a0d3d8e4a1831df911-1] regionserver.HStore(310): Store=4b8dbda31aa7e5a0d3d8e4a1831df911/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:42,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,41863,1689963267427 in 180 msec 2023-07-21 18:14:42,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, REOPEN/MOVE in 510 msec 2023-07-21 18:14:42,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:42,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b8dbda31aa7e5a0d3d8e4a1831df911; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9958203840, jitterRate=-0.07256999611854553}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:42,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b8dbda31aa7e5a0d3d8e4a1831df911: 2023-07-21 18:14:42,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911., pid=105, masterSystemTime=1689963282128 2023-07-21 18:14:42,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:42,172 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 
2023-07-21 18:14:42,172 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:42,173 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963282172"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963282172"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963282172"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963282172"}]},"ts":"1689963282172"} 2023-07-21 18:14:42,177 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-21 18:14:42,177 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,41863,1689963267427 in 197 msec 2023-07-21 18:14:42,179 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, REOPEN/MOVE in 528 msec 2023-07-21 18:14:42,573 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 18:14:42,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-21 18:14:42,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_2090606750. 
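The span from "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2090606750" down to "All regions from table(s) ... moved to target group" shows one REOPEN/MOVE TransitRegionStateProcedure per region (pids 100 and 101): each region is closed on its original server and reopened on jenkins-hbase4.apache.org,41863, the server in the target group, while the RPC handler blocks in ProcedureSyncWait until both procedures finish. A hedged client-side sketch, assuming the branch-2 RSGroupAdminClient that backs the RSGroupAdminService requests logged here; the group name is the one generated by the test:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          Set<TableName> tables = new HashSet<>();
          tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
          tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));

          // Issues the RSGroupAdminService.MoveTables request seen in the log; the
          // endpoint waits for the region moves before returning (ProcedureSyncWait above).
          rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_2090606750");

          // Mirrors the GetRSGroupInfoOfTable calls used to verify the move.
          RSGroupInfo info =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
          System.out.println("group of GrouptestMultiTableMoveA: " + info.getName());
        }
      }
    }

In the HBase shell the same request would presumably be issued by move_tables_rsgroup with the target group and the two table names.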
2023-07-21 18:14:42,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:42,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:42,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:42,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 18:14:42,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:42,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 18:14:42,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:42,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:42,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:42,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2090606750 2023-07-21 18:14:42,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:42,679 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 18:14:42,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-21 18:14:42,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:42,691 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963282690"}]},"ts":"1689963282690"} 2023-07-21 18:14:42,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=106 2023-07-21 18:14:42,692 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 18:14:42,695 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 18:14:42,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, UNASSIGN}] 2023-07-21 18:14:42,702 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, UNASSIGN 2023-07-21 18:14:42,703 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:42,703 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963282702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963282702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963282702"}]},"ts":"1689963282702"} 2023-07-21 18:14:42,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:42,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 18:14:42,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 89588fea88f911077d66bb065dfc3cf5, disabling compactions & flushes 2023-07-21 18:14:42,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:42,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:42,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. after waiting 0 ms 2023-07-21 18:14:42,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 
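The entries from "Started disable of GrouptestMultiTableMoveA" onward are the test's cleanup: DisableTableProcedure pid=106 unassigns the region and marks the table DISABLED, DeleteTableProcedure pid=109 archives its files, removes its rows from hbase:meta and drops it from the rsgroup, and the same disable/delete sequence then begins for GrouptestMultiTableMoveB (pid=110). A minimal sketch of the Admin calls behind this, assuming the standard synchronous Admin API; the repeated "Checking to see if procedure is done" entries are the client polling for completion:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (String name : new String[] { "GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB" }) {
            TableName table = TableName.valueOf(name);
            // DisableTableProcedure: unassigns the region(s), sets state=DISABLED in hbase:meta.
            admin.disableTable(table);
            // DeleteTableProcedure: archives HFiles/recovered.edits, deletes the meta rows,
            // and (via RSGroupAdminEndpoint) removes the table from its rsgroup.
            admin.deleteTable(table);
          }
        }
      }
    }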
2023-07-21 18:14:42,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:42,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5. 2023-07-21 18:14:42,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 89588fea88f911077d66bb065dfc3cf5: 2023-07-21 18:14:42,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:42,869 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=89588fea88f911077d66bb065dfc3cf5, regionState=CLOSED 2023-07-21 18:14:42,870 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963282869"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963282869"}]},"ts":"1689963282869"} 2023-07-21 18:14:42,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-21 18:14:42,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 89588fea88f911077d66bb065dfc3cf5, server=jenkins-hbase4.apache.org,41863,1689963267427 in 167 msec 2023-07-21 18:14:42,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-21 18:14:42,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=89588fea88f911077d66bb065dfc3cf5, UNASSIGN in 177 msec 2023-07-21 18:14:42,876 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963282875"}]},"ts":"1689963282875"} 2023-07-21 18:14:42,877 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 18:14:42,879 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 18:14:42,882 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 199 msec 2023-07-21 18:14:42,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 18:14:42,994 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-21 18:14:42,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-21 18:14:42,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-21 18:14:42,997 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:42,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_2090606750' 2023-07-21 18:14:42,998 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:43,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:43,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,002 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:43,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 18:14:43,004 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits] 2023-07-21 18:14:43,010 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5/recovered.edits/7.seqid 2023-07-21 18:14:43,011 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveA/89588fea88f911077d66bb065dfc3cf5 2023-07-21 18:14:43,011 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 18:14:43,016 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:43,018 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-21 18:14:43,020 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 18:14:43,021 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:43,021 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-21 18:14:43,021 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963283021"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:43,022 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 18:14:43,022 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 89588fea88f911077d66bb065dfc3cf5, NAME => 'GrouptestMultiTableMoveA,,1689963280374.89588fea88f911077d66bb065dfc3cf5.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 18:14:43,023 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 18:14:43,023 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963283023"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:43,024 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 18:14:43,026 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 18:14:43,027 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 31 msec 2023-07-21 18:14:43,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 18:14:43,106 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-21 18:14:43,106 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 18:14:43,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-21 18:14:43,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 18:14:43,111 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963283111"}]},"ts":"1689963283111"} 2023-07-21 18:14:43,113 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 18:14:43,121 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 18:14:43,122 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, UNASSIGN}] 2023-07-21 18:14:43,124 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, UNASSIGN 2023-07-21 18:14:43,124 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:43,124 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963283124"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963283124"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963283124"}]},"ts":"1689963283124"} 2023-07-21 18:14:43,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,41863,1689963267427}] 2023-07-21 18:14:43,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 18:14:43,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:43,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b8dbda31aa7e5a0d3d8e4a1831df911, disabling compactions & flushes 2023-07-21 18:14:43,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:43,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:43,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. after waiting 0 ms 2023-07-21 18:14:43,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 2023-07-21 18:14:43,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:43,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911. 
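Annotation: the repeated DEBUG lines "Checking to see if procedure is done pid=110" are the client side polling the master for the procedure result while it waits on the disable operation (HBaseAdmin$TableFuture in the log). A sketch of the asynchronous form of the same call, assuming the HBase 2.x Admin#disableTableAsync API; the 60-second timeout is arbitrary and only illustrative:

```java
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableAsyncSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      Future<Void> f = admin.disableTableAsync(TableName.valueOf("GrouptestMultiTableMoveB"));
      // Waiting on the future polls the master for the procedure result, which is
      // what surfaces in the master log as the periodic
      // "Checking to see if procedure is done pid=..." DEBUG entries.
      f.get(60, TimeUnit.SECONDS);
    }
  }
}
```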
2023-07-21 18:14:43,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b8dbda31aa7e5a0d3d8e4a1831df911: 2023-07-21 18:14:43,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:43,287 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=4b8dbda31aa7e5a0d3d8e4a1831df911, regionState=CLOSED 2023-07-21 18:14:43,288 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689963283287"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963283287"}]},"ts":"1689963283287"} 2023-07-21 18:14:43,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 18:14:43,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 4b8dbda31aa7e5a0d3d8e4a1831df911, server=jenkins-hbase4.apache.org,41863,1689963267427 in 163 msec 2023-07-21 18:14:43,293 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-21 18:14:43,293 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=4b8dbda31aa7e5a0d3d8e4a1831df911, UNASSIGN in 169 msec 2023-07-21 18:14:43,294 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963283294"}]},"ts":"1689963283294"} 2023-07-21 18:14:43,295 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 18:14:43,297 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 18:14:43,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 191 msec 2023-07-21 18:14:43,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 18:14:43,414 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-21 18:14:43,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-21 18:14:43,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,417 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_2090606750' 2023-07-21 18:14:43,418 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:43,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,422 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:43,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 18:14:43,424 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits] 2023-07-21 18:14:43,429 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits/7.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911/recovered.edits/7.seqid 2023-07-21 18:14:43,430 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/GrouptestMultiTableMoveB/4b8dbda31aa7e5a0d3d8e4a1831df911 2023-07-21 18:14:43,430 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 18:14:43,432 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,435 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 18:14:43,436 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 18:14:43,438 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,438 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
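Annotation: the DeleteTableProcedure entries above show the filesystem side of a table drop — HFileArchiver moves the region directory out of .tmp/data into the cluster's archive/ tree before the region rows and the table state are removed from hbase:meta. A sketch of the client call, again assuming the standard HBase 2.x Admin API; the disabled-state check mirrors the precondition the procedure enforces:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("GrouptestMultiTableMoveB");
      if (admin.isTableDisabled(tn)) {   // deleteTable requires a disabled table
        // Drives the master-side DeleteTableProcedure: region directories are archived
        // (the HFileArchiver entries above), then the region and table-state rows are
        // deleted from hbase:meta and the table descriptor is removed.
        admin.deleteTable(tn);
      }
    }
  }
}
```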
2023-07-21 18:14:43,438 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963283438"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:43,439 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 18:14:43,439 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4b8dbda31aa7e5a0d3d8e4a1831df911, NAME => 'GrouptestMultiTableMoveB,,1689963280990.4b8dbda31aa7e5a0d3d8e4a1831df911.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 18:14:43,439 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 18:14:43,439 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963283439"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:43,441 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 18:14:43,443 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 18:14:43,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 28 msec 2023-07-21 18:14:43,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 18:14:43,526 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-21 18:14:43,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 18:14:43,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863] to rsgroup default 2023-07-21 18:14:43,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2090606750 2023-07-21 18:14:43,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_2090606750, current retry=0 2023-07-21 18:14:43,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427] are moved back to Group_testMultiTableMove_2090606750 2023-07-21 18:14:43,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_2090606750 => default 2023-07-21 18:14:43,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_2090606750 2023-07-21 18:14:43,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:43,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
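Annotation: the entries above are the test's tearDown restoring rsgroup state — the remaining server is moved back into 'default' and the per-test group is removed through the RSGroupAdmin coprocessor endpoint. A sketch of the equivalent client calls, assuming the hbase-rsgroup client these tests use (RSGroupAdminClient, visible in the stack traces nearby) and its Connection-based constructor; the host, port, and group name are copied from the log and only make sense on that cluster:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the region server back into 'default', then drop the now-empty test group.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41863)),
          "default");
      rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_2090606750");
    }
  }
}
```

The entries that follow show why the subsequent moveServers call fails: jenkins-hbase4.apache.org:45593 is the master's RPC address (the handler threads above run on port=45593, and that address never appears in the default group's server list), so moving it into the 'master' group is rejected with a ConstraintException; TestRSGroupsBase logs it as "Got this on setup, FYI" and continues.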
2023-07-21 18:14:43,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:43,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:43,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:43,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,555 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:43,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:43,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:43,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:43,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964483565, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:43,566 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:43,568 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:43,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,569 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:43,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,587 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=519 (was 521), OpenFileDescriptor=814 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=593 (was 636), ProcessCount=174 (was 174), AvailableMemoryMB=7585 (was 7624) 2023-07-21 18:14:43,587 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-21 18:14:43,605 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=519, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=593, ProcessCount=174, AvailableMemoryMB=7584 2023-07-21 18:14:43,605 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-21 18:14:43,605 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 18:14:43,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:14:43,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:43,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:43,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:43,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,619 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:43,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:43,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:43,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:43,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964483633, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:43,634 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:14:43,636 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:43,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,637 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:43,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-21 18:14:43,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup oldGroup 2023-07-21 18:14:43,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:43,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to default 2023-07-21 18:14:43,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 18:14:43,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 18:14:43,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 18:14:43,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,661 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-21 18:14:43,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 18:14:43,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:43,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049] to rsgroup anotherRSGroup 2023-07-21 18:14:43,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 18:14:43,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:43,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:43,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44049,1689963263942] are moved back to default 2023-07-21 18:14:43,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 18:14:43,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,683 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 18:14:43,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 18:14:43,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 18:14:43,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:53692 deadline: 1689964483690, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 18:14:43,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 18:14:43,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:53692 deadline: 1689964483692, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 18:14:43,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-21 18:14:43,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:53692 deadline: 1689964483693, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 18:14:43,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-21 18:14:43,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:53692 deadline: 1689964483694, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 18:14:43,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
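The renames rejected above cover the three constraints that testRenameRSGroupConstraints exercises: the source group must exist, the target name must not already be taken, and the default group can never be renamed. A minimal client-side sketch of those checks follows; it assumes the RSGroupAdminClient constructor and the renameRSGroup(String, String) signature present in this hbase-rsgroup module, and the helper names are illustrative only.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintChecks {
  // Mirrors the failures logged above: missing source group, existing target name,
  // and renaming (or renaming onto) the reserved default group.
  static void checkRenameConstraints(Connection conn) throws IOException {
    RSGroupAdmin admin = new RSGroupAdminClient(conn);
    expectConstraintViolation(() -> admin.renameRSGroup("nonExistingRSGroup", "newRSGroup1"));
    expectConstraintViolation(() -> admin.renameRSGroup("oldGroup", "anotherRSGroup"));
    expectConstraintViolation(() -> admin.renameRSGroup("default", "newRSGroup2"));
    expectConstraintViolation(() -> admin.renameRSGroup("oldGroup", "default"));
  }

  interface Call { void run() throws IOException; }

  static void expectConstraintViolation(Call call) {
    try {
      call.run();
      throw new AssertionError("expected a ConstraintException");
    } catch (IOException e) {
      // The server-side ConstraintException is unwrapped by the RPC client; the
      // check here is simplified and does not inspect the nested remote message.
      if (!(e instanceof ConstraintException)) {
        throw new AssertionError("unexpected exception type: " + e, e);
      }
    }
  }
}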
2023-07-21 18:14:43,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049] to rsgroup default 2023-07-21 18:14:43,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 18:14:43,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:43,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 18:14:43,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44049,1689963263942] are moved back to anotherRSGroup 2023-07-21 18:14:43,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 18:14:43,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-21 18:14:43,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 18:14:43,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 18:14:43,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:43,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 18:14:43,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 18:14:43,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to oldGroup 2023-07-21 18:14:43,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 18:14:43,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-21 18:14:43,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:43,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
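The teardown sequence above (an empty MoveTables, MoveServers back to default, then RemoveRSGroup for anotherRSGroup and oldGroup) amounts to draining every non-default group and deleting it. A hedged sketch of that cleanup is below, assuming the RSGroupAdminClient methods and RSGroupInfo accessors used in this branch carry these signatures.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanup {
  // Drain every non-default group back into "default" and then drop it, following
  // the MoveTables -> MoveServers -> RemoveRSGroup order recorded in the log.
  static void restoreDefaultGrouping(Connection conn) throws IOException {
    RSGroupAdmin admin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue;
      }
      // Empty sets are tolerated by the server, which logs and ignores them,
      // but the guards keep the intent explicit.
      if (!group.getTables().isEmpty()) {
        admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      }
      if (!group.getServers().isEmpty()) {
        admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      }
      admin.removeRSGroup(group.getName());
    }
  }
}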
2023-07-21 18:14:43,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:43,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:43,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:43,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,735 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:43,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:43,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:43,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:43,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964483745, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:43,746 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:43,747 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:43,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,748 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:43,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,766 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=523 (was 519) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=593 (was 593), ProcessCount=174 (was 174), AvailableMemoryMB=7585 (was 7584) - AvailableMemoryMB LEAK? - 2023-07-21 18:14:43,766 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-21 18:14:43,782 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=523, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=593, ProcessCount=174, AvailableMemoryMB=7585 2023-07-21 18:14:43,782 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-21 18:14:43,783 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 18:14:43,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:43,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
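Both the teardown above and the setup for testRenameRSGroup that follows recreate a "master" group and attempt to move the master's address (port 45593) into it; since the master is not a region server, MoveServers fails with the ConstraintException logged here, and the test base records it only as a warning. A rough sketch of that step, with the RSGroupAdmin signatures assumed and the method name purely illustrative:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MasterGroupSetup {
  // Recreate the "master" group and try to park the active master's address in it.
  // The master does not register as a region server, so the move is expected to
  // fail with "Server <host>:<port> is either offline or it does not exist."
  static void isolateMaster(Connection conn, ServerName master) throws IOException {
    RSGroupAdmin admin = new RSGroupAdminClient(conn);
    admin.addRSGroup("master");
    try {
      admin.moveServers(Collections.singleton(master.getAddress()), "master");
    } catch (ConstraintException e) {
      // Benign in this setup path; the log records it as "Got this on setup, FYI".
    }
  }
}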
2023-07-21 18:14:43,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:43,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:43,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:43,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:43,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:43,798 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:43,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:43,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:43,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:43,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:43,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964483812, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:43,813 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:43,815 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,816 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:43,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:43,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-21 18:14:43,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:43,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:43,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup oldgroup 2023-07-21 18:14:43,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:43,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:43,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to default 2023-07-21 18:14:43,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 18:14:43,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:43,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:43,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:43,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 18:14:43,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:43,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:43,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 18:14:43,841 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:43,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-21 18:14:43,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 18:14:43,843 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:43,844 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:43,844 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:43,844 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:43,848 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:43,849 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:43,850 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 empty. 
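[Editor's note] The AddRSGroup and MoveServers entries above are the server side of ordinary rsgroup admin calls. For orientation only, a minimal client-side sketch of issuing the same sequence, assuming the RSGroupAdminClient API from the hbase-rsgroup module on branch-2.4 and reusing the host:port values printed in the log; the class name is illustrative, not part of the test:

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersToOldGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the group, then move two region servers into it
      // (host:port values taken from the log above).
      rsGroupAdmin.addRSGroup("oldgroup");
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41863));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43419));
      rsGroupAdmin.moveServers(servers, "oldgroup");
    }
  }
}

In this run the two servers carried no regions that had to be drained first, which is why the log shows "Moving 0 region(s) to group default" immediately before "Move servers done: default => oldgroup".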
2023-07-21 18:14:43,850 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:43,850 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 18:14:43,869 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:43,870 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => b673e11a35285324ee5d9a3e17b12d76, NAME => 'testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing b673e11a35285324ee5d9a3e17b12d76, disabling compactions & flushes 2023-07-21 18:14:43,885 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. after waiting 0 ms 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:43,885 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:43,885 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:43,888 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:43,889 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963283889"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963283889"}]},"ts":"1689963283889"} 2023-07-21 18:14:43,890 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
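[Editor's note] The create 'testRename' descriptor logged above is a single-family table with mostly default attributes. A short sketch of the equivalent createTable call through the Java Admin API; the class name is illustrative, and only BLOOMFILTER and VERSIONS are set explicitly to match the descriptor printed in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'tr' (BLOOMFILTER=NONE, VERSIONS=1,
      // everything else left at defaults, REGION_REPLICATION=1).
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("tr"))
              .setBloomFilterType(BloomType.NONE)
              .setMaxVersions(1)
              .build())
          .build();
      admin.createTable(desc); // drives a CreateTableProcedure like pid=114 above
    }
  }
}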
2023-07-21 18:14:43,891 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:43,891 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963283891"}]},"ts":"1689963283891"} 2023-07-21 18:14:43,892 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 18:14:43,895 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:43,895 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:43,895 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:43,895 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:43,896 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, ASSIGN}] 2023-07-21 18:14:43,898 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, ASSIGN 2023-07-21 18:14:43,898 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:43,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 18:14:44,049 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 18:14:44,050 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:44,050 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284050"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963284050"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963284050"}]},"ts":"1689963284050"} 2023-07-21 18:14:44,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:44,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 18:14:44,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b673e11a35285324ee5d9a3e17b12d76, NAME => 'testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:44,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:44,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,210 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,211 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:44,211 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:44,212 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b673e11a35285324ee5d9a3e17b12d76 columnFamilyName tr 2023-07-21 18:14:44,212 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(310): Store=b673e11a35285324ee5d9a3e17b12d76/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:44,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b673e11a35285324ee5d9a3e17b12d76; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11002320800, jitterRate=0.024670973420143127}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:44,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:44,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76., pid=116, masterSystemTime=1689963284204 2023-07-21 18:14:44,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 
2023-07-21 18:14:44,221 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:44,221 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963284221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963284221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963284221"}]},"ts":"1689963284221"} 2023-07-21 18:14:44,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-21 18:14:44,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,46437,1689963263715 in 171 msec 2023-07-21 18:14:44,233 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 18:14:44,233 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, ASSIGN in 328 msec 2023-07-21 18:14:44,233 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:44,233 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963284233"}]},"ts":"1689963284233"} 2023-07-21 18:14:44,235 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 18:14:44,238 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:44,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 400 msec 2023-07-21 18:14:44,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 18:14:44,446 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-21 18:14:44,446 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 18:14:44,446 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:44,450 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-21 18:14:44,450 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:44,451 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
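[Editor's note] Once the CreateTableProcedure finishes, the test utility simply waits until every region of testRename is reported as assigned. For orientation, a small sketch of performing the same check from a client, assuming the standard RegionLocator API; the class name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowTestRenameLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
      // Each HRegionLocation pairs a region with the server currently hosting it,
      // e.g. jenkins-hbase4.apache.org,46437,... right after the initial ASSIGN above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}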
2023-07-21 18:14:44,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-21 18:14:44,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:44,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:44,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:44,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:44,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 18:14:44,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region b673e11a35285324ee5d9a3e17b12d76 to RSGroup oldgroup 2023-07-21 18:14:44,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:44,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:44,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:44,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:44,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:44,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE 2023-07-21 18:14:44,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 18:14:44,459 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE 2023-07-21 18:14:44,460 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:44,460 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284460"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963284460"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963284460"}]},"ts":"1689963284460"} 2023-07-21 18:14:44,461 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:44,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b673e11a35285324ee5d9a3e17b12d76, disabling compactions & flushes 2023-07-21 18:14:44,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. after waiting 0 ms 2023-07-21 18:14:44,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:44,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:44,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b673e11a35285324ee5d9a3e17b12d76 move to jenkins-hbase4.apache.org,43419,1689963263425 record at close sequenceid=2 2023-07-21 18:14:44,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,622 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=CLOSED 2023-07-21 18:14:44,623 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284622"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963284622"}]},"ts":"1689963284622"} 2023-07-21 18:14:44,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 18:14:44,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,46437,1689963263715 in 163 msec 2023-07-21 18:14:44,626 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43419,1689963263425; 
forceNewPlan=false, retain=false 2023-07-21 18:14:44,776 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:44,776 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:44,776 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284776"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963284776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963284776"}]},"ts":"1689963284776"} 2023-07-21 18:14:44,778 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:44,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b673e11a35285324ee5d9a3e17b12d76, NAME => 'testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:44,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:44,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,936 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,937 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:44,937 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:44,937 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b673e11a35285324ee5d9a3e17b12d76 columnFamilyName tr 2023-07-21 18:14:44,938 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(310): Store=b673e11a35285324ee5d9a3e17b12d76/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:44,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:44,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b673e11a35285324ee5d9a3e17b12d76; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11349883040, jitterRate=0.05704022943973541}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:44,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:44,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76., pid=119, masterSystemTime=1689963284930 2023-07-21 18:14:44,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:44,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 
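[Editor's note] The MoveTables request ("move tables [testRename] to rsgroup oldgroup") is what triggers the REOPEN/MOVE procedure (pid=117) that closes the region on ...,46437,... and reopens it on ...,43419,... as logged above. A minimal sketch of issuing that request, again assuming the RSGroupAdminClient API from the hbase-rsgroup module; the class name is illustrative. As the ProcedureSyncWait entry that follows shows, the master holds the RPC until the region moves have completed:

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTestRenameToOldGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("testRename"));
      // Returns after the table's regions have been reopened on servers
      // of the target group (the REOPEN/MOVE procedure in the log above).
      rsGroupAdmin.moveTables(tables, "oldgroup");
    }
  }
}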
2023-07-21 18:14:44,946 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:44,946 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963284946"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963284946"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963284946"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963284946"}]},"ts":"1689963284946"} 2023-07-21 18:14:44,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-21 18:14:44,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,43419,1689963263425 in 170 msec 2023-07-21 18:14:44,951 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE in 492 msec 2023-07-21 18:14:45,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-21 18:14:45,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-21 18:14:45,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:45,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:45,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:45,468 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:45,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 18:14:45,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:45,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 18:14:45,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:45,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 18:14:45,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:45,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:45,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:45,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-21 18:14:45,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:45,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:45,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:45,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:45,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:45,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:45,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:45,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:45,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049] to rsgroup normal 2023-07-21 18:14:45,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:45,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:45,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:45,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:45,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:45,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:45,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44049,1689963263942] are moved back to default 2023-07-21 18:14:45,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 18:14:45,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:45,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:45,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:45,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 18:14:45,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:45,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:45,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 18:14:45,510 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:45,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-21 18:14:45,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 18:14:45,512 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:45,513 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:45,513 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:45,515 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 18:14:45,515 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:45,517 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:45,519 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,520 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 empty. 2023-07-21 18:14:45,520 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,520 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 18:14:45,537 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:45,539 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 205e1c24c493094d8d96bedf6e852764, NAME => 'unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 205e1c24c493094d8d96bedf6e852764, disabling compactions & flushes 2023-07-21 18:14:45,605 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. after waiting 0 ms 2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:45,605 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 
2023-07-21 18:14:45,605 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:45,608 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:45,609 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963285609"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963285609"}]},"ts":"1689963285609"} 2023-07-21 18:14:45,610 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:45,611 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:45,611 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963285611"}]},"ts":"1689963285611"} 2023-07-21 18:14:45,612 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 18:14:45,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 18:14:45,617 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, ASSIGN}] 2023-07-21 18:14:45,619 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, ASSIGN 2023-07-21 18:14:45,620 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:45,771 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:45,771 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963285771"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963285771"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963285771"}]},"ts":"1689963285771"} 2023-07-21 18:14:45,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:45,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-21 18:14:45,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:45,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 205e1c24c493094d8d96bedf6e852764, NAME => 'unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:45,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:45,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,930 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,932 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:45,932 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:45,932 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 205e1c24c493094d8d96bedf6e852764 columnFamilyName ut 2023-07-21 18:14:45,932 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(310): Store=205e1c24c493094d8d96bedf6e852764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:45,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:45,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:45,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 205e1c24c493094d8d96bedf6e852764; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11430967680, jitterRate=0.06459182500839233}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:45,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:45,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764., pid=122, masterSystemTime=1689963285924 2023-07-21 18:14:45,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:45,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 
2023-07-21 18:14:45,941 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:45,941 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963285941"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963285941"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963285941"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963285941"}]},"ts":"1689963285941"} 2023-07-21 18:14:45,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-21 18:14:45,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715 in 169 msec 2023-07-21 18:14:45,945 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 18:14:45,945 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, ASSIGN in 327 msec 2023-07-21 18:14:45,946 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:45,946 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963285946"}]},"ts":"1689963285946"} 2023-07-21 18:14:45,947 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 18:14:45,950 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:45,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 444 msec 2023-07-21 18:14:46,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 18:14:46,115 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-21 18:14:46,115 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 18:14:46,115 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:46,119 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-21 18:14:46,120 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:46,120 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
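For reference, the client side of a CreateTableProcedure such as pid=120 above is a plain Admin.createTable request followed by a wait for region assignment. The following is a minimal sketch, not part of the captured log, assuming the standard HBase 2.x client API and a Configuration that points at this mini-cluster; the class name CreateUnmovedTable is illustrative, while the table name and the "ut" column family are taken from the log entries above.

// Illustrative sketch only; assumes hbase-client 2.x on the classpath and a reachable cluster.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateUnmovedTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("unmovedTable");
      // Single column family "ut", mirroring the store opened in the log above.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
          .build());
      // createTable blocks until the master procedure completes; the test additionally
      // waits until every region of the table is assigned before issuing rsgroup calls.
    }
  }
}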
2023-07-21 18:14:46,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-21 18:14:46,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 18:14:46,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:46,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:46,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:46,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:46,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 18:14:46,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 205e1c24c493094d8d96bedf6e852764 to RSGroup normal 2023-07-21 18:14:46,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE 2023-07-21 18:14:46,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 18:14:46,128 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE 2023-07-21 18:14:46,128 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:46,129 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963286128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963286128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963286128"}]},"ts":"1689963286128"} 2023-07-21 18:14:46,130 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:46,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 205e1c24c493094d8d96bedf6e852764, disabling compactions & flushes 2023-07-21 18:14:46,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 
2023-07-21 18:14:46,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:46,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. after waiting 0 ms 2023-07-21 18:14:46,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:46,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:46,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:46,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:46,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 205e1c24c493094d8d96bedf6e852764 move to jenkins-hbase4.apache.org,44049,1689963263942 record at close sequenceid=2 2023-07-21 18:14:46,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=CLOSED 2023-07-21 18:14:46,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963286290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963286290"}]},"ts":"1689963286290"} 2023-07-21 18:14:46,292 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 18:14:46,292 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715 in 161 msec 2023-07-21 18:14:46,293 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:46,443 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:46,444 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963286443"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963286443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963286443"}]},"ts":"1689963286443"} 2023-07-21 18:14:46,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:46,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:46,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 205e1c24c493094d8d96bedf6e852764, NAME => 'unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,602 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,603 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:46,603 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:46,603 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
205e1c24c493094d8d96bedf6e852764 columnFamilyName ut 2023-07-21 18:14:46,604 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(310): Store=205e1c24c493094d8d96bedf6e852764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:46,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:46,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 205e1c24c493094d8d96bedf6e852764; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10364670880, jitterRate=-0.03471480309963226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:46,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:46,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764., pid=125, masterSystemTime=1689963286597 2023-07-21 18:14:46,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:46,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 
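The close on jenkins-hbase4.apache.org,46437 and the reopen on jenkins-hbase4.apache.org,44049 above are the server side of a single move-tables request (pid=123..125). A minimal client-side sketch follows, assuming the branch-2 RSGroupAdminClient that ships with this hbase-rsgroup module; the class name MoveUnmovedTableToNormal and the connection setup are illustrative, while the table and group names come from the log.

// Illustrative sketch only; the exact client wiring in the test may differ.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveUnmovedTableToNormal {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Reassigns every region of the table onto servers of group "normal"; the master
      // drives the REOPEN/MOVE TransitRegionStateProcedure seen above and the call
      // returns once all regions have landed in the target group.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
      // Matches the GetRSGroupInfoOfTable verification requests in the log.
      System.out.println(
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable")));
    }
  }
}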
2023-07-21 18:14:46,611 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:46,611 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963286611"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963286611"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963286611"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963286611"}]},"ts":"1689963286611"} 2023-07-21 18:14:46,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-21 18:14:46,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,44049,1689963263942 in 168 msec 2023-07-21 18:14:46,615 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE in 487 msec 2023-07-21 18:14:47,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-21 18:14:47,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 18:14:47,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:47,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:47,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:47,141 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:47,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 18:14:47,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:47,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 18:14:47,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:47,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 18:14:47,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:47,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-21 18:14:47,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:47,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:47,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:47,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:47,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 18:14:47,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 18:14:47,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:47,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:47,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-21 18:14:47,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:47,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 18:14:47,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:47,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 18:14:47,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:47,190 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:47,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-21 18:14:47,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:47,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:47,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:47,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:47,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:47,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 18:14:47,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region 205e1c24c493094d8d96bedf6e852764 to RSGroup default 2023-07-21 18:14:47,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE 2023-07-21 18:14:47,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 18:14:47,203 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE 2023-07-21 18:14:47,204 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:47,204 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963287204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963287204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963287204"}]},"ts":"1689963287204"} 2023-07-21 18:14:47,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:47,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 205e1c24c493094d8d96bedf6e852764, disabling compactions & flushes 2023-07-21 18:14:47,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. after waiting 0 ms 2023-07-21 18:14:47,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:47,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:47,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 205e1c24c493094d8d96bedf6e852764 move to jenkins-hbase4.apache.org,46437,1689963263715 record at close sequenceid=5 2023-07-21 18:14:47,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,368 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=CLOSED 2023-07-21 18:14:47,368 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963287368"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963287368"}]},"ts":"1689963287368"} 2023-07-21 18:14:47,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-21 18:14:47,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,44049,1689963263942 in 163 msec 2023-07-21 18:14:47,372 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:47,522 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:47,522 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963287522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963287522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963287522"}]},"ts":"1689963287522"} 2023-07-21 18:14:47,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:47,635 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 18:14:47,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 205e1c24c493094d8d96bedf6e852764, NAME => 'unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:47,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:47,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,684 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,685 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:47,685 DEBUG [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/ut 2023-07-21 18:14:47,685 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 205e1c24c493094d8d96bedf6e852764 columnFamilyName ut 2023-07-21 18:14:47,686 INFO [StoreOpener-205e1c24c493094d8d96bedf6e852764-1] regionserver.HStore(310): Store=205e1c24c493094d8d96bedf6e852764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:47,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:47,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 205e1c24c493094d8d96bedf6e852764; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10423248640, jitterRate=-0.029259324073791504}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:47,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:47,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764., pid=128, masterSystemTime=1689963287677 2023-07-21 18:14:47,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:47,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 
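The 18:14:47 entries above also rename rsgroup oldgroup to newgroup and then re-check the result by group name and by table. A hedged sketch of those admin calls, assuming a branch-2 RSGroupAdminClient that exposes the RenameRSGroup RPC exercised above; the class name RenameGroupExample and the connection setup are illustrative, while the group and table names are taken from the log.

// Illustrative sketch only; renameRSGroup is assumed available in this client on branch-2.4.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenameGroupExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Renames the group in place; its member servers and tables (here testRename)
      // follow the new name, which is why no region movement appears in the log.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
      RSGroupInfo byName = rsGroupAdmin.getRSGroupInfo("newgroup");
      RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      // Both lookups should now report "newgroup", matching the
      // GetRSGroupInfo / GetRSGroupInfoOfTable requests recorded above.
      System.out.println(byName.getName() + " / " + byTable.getName());
    }
  }
}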
2023-07-21 18:14:47,694 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=205e1c24c493094d8d96bedf6e852764, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:47,694 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689963287694"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963287694"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963287694"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963287694"}]},"ts":"1689963287694"} 2023-07-21 18:14:47,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 18:14:47,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 205e1c24c493094d8d96bedf6e852764, server=jenkins-hbase4.apache.org,46437,1689963263715 in 171 msec 2023-07-21 18:14:47,701 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=205e1c24c493094d8d96bedf6e852764, REOPEN/MOVE in 495 msec 2023-07-21 18:14:48,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-21 18:14:48,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-21 18:14:48,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:48,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44049] to rsgroup default 2023-07-21 18:14:48,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 18:14:48,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:48,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:48,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:48,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:14:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 18:14:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44049,1689963263942] are moved back to normal 2023-07-21 18:14:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 18:14:48,210 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:48,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-21 18:14:48,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:48,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:48,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:48,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 18:14:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:48,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:48,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:14:48,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:48,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:48,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:48,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:48,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:48,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:48,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:48,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-21 18:14:48,229 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:48,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:48,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 18:14:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(345): Moving region b673e11a35285324ee5d9a3e17b12d76 to RSGroup default 2023-07-21 18:14:48,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE 2023-07-21 18:14:48,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 18:14:48,233 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE 2023-07-21 18:14:48,233 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:48,233 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963288233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963288233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963288233"}]},"ts":"1689963288233"} 2023-07-21 18:14:48,235 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,43419,1689963263425}] 2023-07-21 18:14:48,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b673e11a35285324ee5d9a3e17b12d76, disabling compactions & flushes 2023-07-21 18:14:48,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 
after waiting 0 ms 2023-07-21 18:14:48,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 18:14:48,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:48,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b673e11a35285324ee5d9a3e17b12d76 move to jenkins-hbase4.apache.org,44049,1689963263942 record at close sequenceid=5 2023-07-21 18:14:48,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,397 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=CLOSED 2023-07-21 18:14:48,397 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963288397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963288397"}]},"ts":"1689963288397"} 2023-07-21 18:14:48,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-21 18:14:48,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,43419,1689963263425 in 166 msec 2023-07-21 18:14:48,404 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:48,554 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 18:14:48,555 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:48,555 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963288555"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963288555"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963288555"}]},"ts":"1689963288555"} 2023-07-21 18:14:48,557 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:48,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b673e11a35285324ee5d9a3e17b12d76, NAME => 'testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:48,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:48,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,714 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,715 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:48,715 DEBUG [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/tr 2023-07-21 18:14:48,716 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b673e11a35285324ee5d9a3e17b12d76 columnFamilyName tr 2023-07-21 18:14:48,716 INFO [StoreOpener-b673e11a35285324ee5d9a3e17b12d76-1] regionserver.HStore(310): Store=b673e11a35285324ee5d9a3e17b12d76/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:48,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:48,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b673e11a35285324ee5d9a3e17b12d76; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11047814400, jitterRate=0.0289078950881958}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:48,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:48,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76., pid=131, masterSystemTime=1689963288708 2023-07-21 18:14:48,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:48,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 
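The remaining entries restore the cluster to its default layout: servers are moved back to the default group and the temporary groups are removed. A hedged sketch of that cleanup, assuming the branch-2 RSGroupAdminClient and Address helper; the class name RestoreDefaultGroups and the connection setup are illustrative, while the server addresses and group names are the ones shown in the surrounding entries.

// Illustrative sketch only; mirrors the MoveServers / RemoveRSGroup requests in the log.
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RestoreDefaultGroups {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the remaining members of newgroup back into the default group first;
      // their regions are moved back as part of the same request.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromString("jenkins-hbase4.apache.org:41863"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:43419"));
      rsGroupAdmin.moveServers(servers, "default");
      // Then drop the now-empty group, as the RemoveRSGroup requests below do.
      rsGroupAdmin.removeRSGroup("newgroup");
    }
  }
}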
2023-07-21 18:14:48,725 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=b673e11a35285324ee5d9a3e17b12d76, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:48,726 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689963288725"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963288725"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963288725"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963288725"}]},"ts":"1689963288725"} 2023-07-21 18:14:48,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-21 18:14:48,729 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure b673e11a35285324ee5d9a3e17b12d76, server=jenkins-hbase4.apache.org,44049,1689963263942 in 171 msec 2023-07-21 18:14:48,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=b673e11a35285324ee5d9a3e17b12d76, REOPEN/MOVE in 497 msec 2023-07-21 18:14:49,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-21 18:14:49,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-21 18:14:49,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:49,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:49,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 18:14:49,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:49,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 18:14:49,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to newgroup 2023-07-21 18:14:49,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 18:14:49,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:49,243 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-21 18:14:49,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:49,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:49,256 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:49,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:49,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:49,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:49,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:49,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964489270, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:49,271 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:49,273 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:49,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,275 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:49,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:49,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,295 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=517 (was 523), OpenFileDescriptor=799 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=561 (was 593), ProcessCount=174 (was 174), AvailableMemoryMB=7441 (was 7585) 2023-07-21 18:14:49,295 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 18:14:49,312 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=517, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=561, ProcessCount=174, AvailableMemoryMB=7440 2023-07-21 18:14:49,312 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 18:14:49,312 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 18:14:49,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:49,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:14:49,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:49,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:49,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:49,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:49,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:49,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:49,328 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:49,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:49,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:49,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:49,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:49,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964489341, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:49,341 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:14:49,343 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:49,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,344 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:49,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:49,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 18:14:49,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:49,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 18:14:49,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 18:14:49,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-21 18:14:49,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-21 18:14:49,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:53692 deadline: 1689964489355, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 18:14:49,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-21 18:14:49,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:53692 deadline: 1689964489357, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 18:14:49,360 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 18:14:49,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-21 18:14:49,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-21 18:14:49,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:53692 deadline: 1689964489366, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 18:14:49,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:49,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
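testBogusArgs exercises the read-only lookups and the mutating calls with names that do not exist: the lookups for table "nonexistent", server bogus:123 and group "bogus" come back empty, while removeRSGroup, moveServers and balanceRSGroup are rejected with ConstraintException, as logged above. A rough sketch of the same probes, assuming the branch-2.4 RSGroupAdminClient API (Java method names differ slightly from the protobuf RPC names in the log, e.g. getRSGroupOfServer for GetRSGroupInfoOfServer):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Lookups against unknown names simply return null ...
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")));
      System.out.println(rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123)));
      System.out.println(rsGroupAdmin.getRSGroupInfo("bogus"));

      // ... while mutating calls against a nonexistent group are rejected with
      // ConstraintException, matching the entries above.
      try {
        rsGroupAdmin.removeRSGroup("bogus");
      } catch (ConstraintException expected) { /* "RSGroup bogus does not exist" */ }
      try {
        rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      } catch (ConstraintException expected) { /* "RSGroup does not exist: bogus" */ }
      try {
        rsGroupAdmin.balanceRSGroup("bogus");
      } catch (ConstraintException expected) { /* "RSGroup does not exist: bogus" */ }
    }
  }
}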
2023-07-21 18:14:49,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:49,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:49,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:49,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:49,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:49,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:49,383 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:49,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:49,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:49,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:49,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:49,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964489403, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:49,407 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:49,409 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:49,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,410 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:49,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:49,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,434 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=521 (was 517) Potentially hanging thread: hconnection-0x23967352-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x23967352-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=799 (was 799), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=561 (was 561), ProcessCount=174 (was 174), AvailableMemoryMB=7441 (was 7440) - AvailableMemoryMB LEAK? - 2023-07-21 18:14:49,434 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-21 18:14:49,452 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=521, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=561, ProcessCount=174, AvailableMemoryMB=7440 2023-07-21 18:14:49,453 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-21 18:14:49,453 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 18:14:49,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:49,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
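Between test methods, TestRSGroupsBase resets the group layout: it moves tables and servers back to default, removes and re-adds the "master" rsgroup, attempts to move the master's address into it and only warns when that fails ("Got this on setup, FYI"), then polls ListRSGroupInfos until the expected groups remain. A sketch of that cleanup cycle, assuming the same branch-2.4 client API; the host:port is the master address from this particular run:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class GroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Drop and recreate the "master" group, as the RemoveRSGroup/AddRSGroup pairs
      // in the log do around each test method.
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");

      // Try to park the master's address in that group. The master is not among the
      // region servers the group manager tracks, so the call fails with
      // "Server ... is either offline or it does not exist." and the test merely logs it.
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:45593")),
            "master");
      } catch (ConstraintException tolerated) {
        // same exception the "Got this on setup, FYI" WARN records
      }

      // Inspect the resulting layout, as the "Waiting for cleanup to finish" loop does.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " -> " + group.getServers());
      }
    }
  }
}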
2023-07-21 18:14:49,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:49,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:49,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:49,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:49,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:49,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:49,467 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:49,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:49,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:49,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:49,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:49,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:49,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964489481, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:49,481 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:49,483 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:49,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,484 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:49,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:49,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:49,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
18:14:49,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:49,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:49,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:49,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 18:14:49,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to default 2023-07-21 18:14:49,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:49,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:49,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:49,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,504 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:49,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:49,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:49,509 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:49,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-21 18:14:49,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 18:14:49,511 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:49,511 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:49,511 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:49,512 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:49,516 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:49,520 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,520 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,520 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,520 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,520 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,521 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 empty. 2023-07-21 18:14:49,521 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a empty. 2023-07-21 18:14:49,521 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 empty. 2023-07-21 18:14:49,522 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 empty. 2023-07-21 18:14:49,522 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e empty. 2023-07-21 18:14:49,522 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,522 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,522 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,523 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,523 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,523 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 18:14:49,544 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:49,545 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0474ff2b8ef881b5f0eea076f1993b72, NAME => 'Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:49,545 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 6dc84ab56db4baa558feb6e08391f81a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:49,545 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f279e2defcb6e80e7325c06238ef6f71, NAME => 'Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:49,559 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,559 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 6dc84ab56db4baa558feb6e08391f81a, disabling compactions & flushes 2023-07-21 18:14:49,559 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:49,559 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:49,559 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. after waiting 0 ms 2023-07-21 18:14:49,559 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 
2023-07-21 18:14:49,559 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:49,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 6dc84ab56db4baa558feb6e08391f81a: 2023-07-21 18:14:49,560 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => b86fd58c5a419a5cb8d1f07350aa7f5e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:49,562 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,563 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 0474ff2b8ef881b5f0eea076f1993b72, disabling compactions & flushes 2023-07-21 18:14:49,563 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:49,563 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:49,563 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. after waiting 0 ms 2023-07-21 18:14:49,563 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:49,563 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 
2023-07-21 18:14:49,563 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 0474ff2b8ef881b5f0eea076f1993b72: 2023-07-21 18:14:49,563 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => b64ec615858fc226e895f7ac82106fc1, NAME => 'Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp 2023-07-21 18:14:49,565 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,566 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f279e2defcb6e80e7325c06238ef6f71, disabling compactions & flushes 2023-07-21 18:14:49,566 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,566 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,566 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. after waiting 0 ms 2023-07-21 18:14:49,566 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,566 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,566 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f279e2defcb6e80e7325c06238ef6f71: 2023-07-21 18:14:49,574 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,574 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing b86fd58c5a419a5cb8d1f07350aa7f5e, disabling compactions & flushes 2023-07-21 18:14:49,574 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 
2023-07-21 18:14:49,574 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:49,574 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. after waiting 0 ms 2023-07-21 18:14:49,574 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:49,575 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:49,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for b86fd58c5a419a5cb8d1f07350aa7f5e: 2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing b64ec615858fc226e895f7ac82106fc1, disabling compactions & flushes 2023-07-21 18:14:49,577 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. after waiting 0 ms 2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:49,577 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 
2023-07-21 18:14:49,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for b64ec615858fc226e895f7ac82106fc1: 2023-07-21 18:14:49,580 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:49,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963289581"}]},"ts":"1689963289581"} 2023-07-21 18:14:49,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963289581"}]},"ts":"1689963289581"} 2023-07-21 18:14:49,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963289581"}]},"ts":"1689963289581"} 2023-07-21 18:14:49,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963289581"}]},"ts":"1689963289581"} 2023-07-21 18:14:49,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963289581"}]},"ts":"1689963289581"} 2023-07-21 18:14:49,583 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 18:14:49,584 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:49,584 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963289584"}]},"ts":"1689963289584"} 2023-07-21 18:14:49,585 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 18:14:49,588 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:49,588 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:49,588 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:49,588 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:49,588 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, ASSIGN}] 2023-07-21 18:14:49,590 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, ASSIGN 2023-07-21 18:14:49,591 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, ASSIGN 2023-07-21 18:14:49,591 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, ASSIGN 2023-07-21 18:14:49,591 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, ASSIGN 2023-07-21 18:14:49,591 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:49,591 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:49,591 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:49,591 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, ASSIGN 2023-07-21 18:14:49,591 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1689963263715; forceNewPlan=false, retain=false 2023-07-21 18:14:49,592 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44049,1689963263942; forceNewPlan=false, retain=false 2023-07-21 18:14:49,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 18:14:49,742 INFO [jenkins-hbase4:45593] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 18:14:49,746 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=b86fd58c5a419a5cb8d1f07350aa7f5e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:49,746 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=6dc84ab56db4baa558feb6e08391f81a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,746 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f279e2defcb6e80e7325c06238ef6f71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,746 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=b64ec615858fc226e895f7ac82106fc1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,746 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289746"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963289746"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963289746"}]},"ts":"1689963289746"} 2023-07-21 18:14:49,746 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289745"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963289745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963289745"}]},"ts":"1689963289745"} 2023-07-21 18:14:49,746 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=0474ff2b8ef881b5f0eea076f1993b72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:49,746 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289746"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963289746"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963289746"}]},"ts":"1689963289746"} 2023-07-21 18:14:49,746 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289745"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963289745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963289745"}]},"ts":"1689963289745"} 2023-07-21 18:14:49,746 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289745"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963289745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963289745"}]},"ts":"1689963289745"} 2023-07-21 18:14:49,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure f279e2defcb6e80e7325c06238ef6f71, 
server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:49,748 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=137, state=RUNNABLE; OpenRegionProcedure b64ec615858fc226e895f7ac82106fc1, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:49,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; OpenRegionProcedure 6dc84ab56db4baa558feb6e08391f81a, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:49,751 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=133, state=RUNNABLE; OpenRegionProcedure 0474ff2b8ef881b5f0eea076f1993b72, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:49,752 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure b86fd58c5a419a5cb8d1f07350aa7f5e, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:49,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 18:14:49,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f279e2defcb6e80e7325c06238ef6f71, NAME => 'Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 18:14:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,907 INFO [StoreOpener-f279e2defcb6e80e7325c06238ef6f71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,907 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 
2023-07-21 18:14:49,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0474ff2b8ef881b5f0eea076f1993b72, NAME => 'Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 18:14:49,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,909 INFO [StoreOpener-0474ff2b8ef881b5f0eea076f1993b72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,909 DEBUG [StoreOpener-f279e2defcb6e80e7325c06238ef6f71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/f 2023-07-21 18:14:49,909 DEBUG [StoreOpener-f279e2defcb6e80e7325c06238ef6f71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/f 2023-07-21 18:14:49,909 INFO [StoreOpener-f279e2defcb6e80e7325c06238ef6f71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f279e2defcb6e80e7325c06238ef6f71 columnFamilyName f 2023-07-21 18:14:49,910 INFO [StoreOpener-f279e2defcb6e80e7325c06238ef6f71-1] regionserver.HStore(310): Store=f279e2defcb6e80e7325c06238ef6f71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:49,910 DEBUG [StoreOpener-0474ff2b8ef881b5f0eea076f1993b72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/f 2023-07-21 18:14:49,910 DEBUG 
[StoreOpener-0474ff2b8ef881b5f0eea076f1993b72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/f 2023-07-21 18:14:49,911 INFO [StoreOpener-0474ff2b8ef881b5f0eea076f1993b72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0474ff2b8ef881b5f0eea076f1993b72 columnFamilyName f 2023-07-21 18:14:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,912 INFO [StoreOpener-0474ff2b8ef881b5f0eea076f1993b72-1] regionserver.HStore(310): Store=0474ff2b8ef881b5f0eea076f1993b72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:49,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:49,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:49,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:49,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:49,925 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f279e2defcb6e80e7325c06238ef6f71; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11698673760, jitterRate=0.08952389657497406}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:49,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f279e2defcb6e80e7325c06238ef6f71: 2023-07-21 18:14:49,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0474ff2b8ef881b5f0eea076f1993b72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10211801440, jitterRate=-0.048951879143714905}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:49,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0474ff2b8ef881b5f0eea076f1993b72: 2023-07-21 18:14:49,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71., pid=138, masterSystemTime=1689963289899 2023-07-21 18:14:49,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72., pid=141, masterSystemTime=1689963289903 2023-07-21 18:14:49,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:49,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 
2023-07-21 18:14:49,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b64ec615858fc226e895f7ac82106fc1, NAME => 'Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 18:14:49,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,929 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f279e2defcb6e80e7325c06238ef6f71, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,929 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289929"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963289929"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963289929"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963289929"}]},"ts":"1689963289929"} 2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:49,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:49,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 
2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b86fd58c5a419a5cb8d1f07350aa7f5e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,930 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=0474ff2b8ef881b5f0eea076f1993b72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:49,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,931 INFO [StoreOpener-b64ec615858fc226e895f7ac82106fc1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,931 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289930"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963289930"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963289930"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963289930"}]},"ts":"1689963289930"} 2023-07-21 18:14:49,932 INFO [StoreOpener-b86fd58c5a419a5cb8d1f07350aa7f5e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,933 DEBUG [StoreOpener-b64ec615858fc226e895f7ac82106fc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/f 2023-07-21 18:14:49,933 DEBUG [StoreOpener-b64ec615858fc226e895f7ac82106fc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/f 2023-07-21 18:14:49,933 INFO [StoreOpener-b64ec615858fc226e895f7ac82106fc1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b64ec615858fc226e895f7ac82106fc1 columnFamilyName f 2023-07-21 18:14:49,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-21 18:14:49,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure f279e2defcb6e80e7325c06238ef6f71, server=jenkins-hbase4.apache.org,44049,1689963263942 in 184 msec 2023-07-21 18:14:49,935 INFO [StoreOpener-b64ec615858fc226e895f7ac82106fc1-1] regionserver.HStore(310): Store=b64ec615858fc226e895f7ac82106fc1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:49,935 DEBUG [StoreOpener-b86fd58c5a419a5cb8d1f07350aa7f5e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/f 2023-07-21 18:14:49,936 DEBUG [StoreOpener-b86fd58c5a419a5cb8d1f07350aa7f5e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/f 2023-07-21 18:14:49,936 INFO [StoreOpener-b86fd58c5a419a5cb8d1f07350aa7f5e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b86fd58c5a419a5cb8d1f07350aa7f5e columnFamilyName f 2023-07-21 18:14:49,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=133 2023-07-21 18:14:49,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, ASSIGN in 346 msec 2023-07-21 18:14:49,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=133, state=SUCCESS; OpenRegionProcedure 0474ff2b8ef881b5f0eea076f1993b72, server=jenkins-hbase4.apache.org,46437,1689963263715 in 181 msec 2023-07-21 18:14:49,938 INFO [StoreOpener-b86fd58c5a419a5cb8d1f07350aa7f5e-1] regionserver.HStore(310): Store=b86fd58c5a419a5cb8d1f07350aa7f5e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:49,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): 
Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, ASSIGN in 349 msec 2023-07-21 18:14:49,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:49,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:49,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:49,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:49,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b86fd58c5a419a5cb8d1f07350aa7f5e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10980655200, jitterRate=0.022653207182884216}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:49,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b86fd58c5a419a5cb8d1f07350aa7f5e: 2023-07-21 18:14:49,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b64ec615858fc226e895f7ac82106fc1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10887906080, jitterRate=0.01401527225971222}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:49,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b64ec615858fc226e895f7ac82106fc1: 2023-07-21 18:14:49,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e., pid=142, masterSystemTime=1689963289903 2023-07-21 18:14:49,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:49,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:49,951 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=b86fd58c5a419a5cb8d1f07350aa7f5e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:49,951 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289951"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963289951"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963289951"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963289951"}]},"ts":"1689963289951"} 2023-07-21 18:14:49,954 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-21 18:14:49,954 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure b86fd58c5a419a5cb8d1f07350aa7f5e, server=jenkins-hbase4.apache.org,46437,1689963263715 in 201 msec 2023-07-21 18:14:49,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1., pid=139, masterSystemTime=1689963289899 2023-07-21 18:14:49,956 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, ASSIGN in 366 msec 2023-07-21 18:14:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:49,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:49,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 
2023-07-21 18:14:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6dc84ab56db4baa558feb6e08391f81a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 18:14:49,956 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=b64ec615858fc226e895f7ac82106fc1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,957 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963289956"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963289956"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963289956"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963289956"}]},"ts":"1689963289956"} 2023-07-21 18:14:49,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:49,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,959 INFO [StoreOpener-6dc84ab56db4baa558feb6e08391f81a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137 2023-07-21 18:14:49,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; OpenRegionProcedure b64ec615858fc226e895f7ac82106fc1, server=jenkins-hbase4.apache.org,44049,1689963263942 in 210 msec 2023-07-21 18:14:49,960 DEBUG [StoreOpener-6dc84ab56db4baa558feb6e08391f81a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/f 2023-07-21 18:14:49,960 DEBUG [StoreOpener-6dc84ab56db4baa558feb6e08391f81a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/f 2023-07-21 18:14:49,961 INFO [StoreOpener-6dc84ab56db4baa558feb6e08391f81a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6dc84ab56db4baa558feb6e08391f81a columnFamilyName f 2023-07-21 18:14:49,961 INFO [StoreOpener-6dc84ab56db4baa558feb6e08391f81a-1] regionserver.HStore(310): Store=6dc84ab56db4baa558feb6e08391f81a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:49,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, ASSIGN in 372 msec 2023-07-21 18:14:49,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:49,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:49,967 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6dc84ab56db4baa558feb6e08391f81a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10204967200, jitterRate=-0.049588367342948914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:49,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6dc84ab56db4baa558feb6e08391f81a: 2023-07-21 18:14:49,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a., pid=140, masterSystemTime=1689963289899 2023-07-21 18:14:49,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:49,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 
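For context, the entries above trace CreateTableProcedure pid=132 assigning the five pre-split regions of Group_testDisabledTableMove before the listener waits for assignment. A minimal client-side sketch of the corresponding create-and-wait step follows, assuming the standard HBase 2.x Admin API; the class wrapper, the connection parameter, and the split keys are illustrative and are not taken from this run.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  // Sketch only: create a five-region table with the single family 'f' seen in the log.
  static void createTable(Connection connection) throws Exception {
    TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Illustrative split keys yielding five regions; the run above uses generated keys
    // between 'aaaaa' and 'zzzzz'.
    byte[][] splitKeys = {
        Bytes.toBytes("aaaaa"), Bytes.toBytes("iiiii"),
        Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz")
    };
    try (Admin admin = connection.getAdmin()) {
      // Drives a CreateTableProcedure like pid=132 above; the test then waits for
      // assignment, which appears in the log as HBaseTestingUtility "Waiting until all
      // regions of table ... get assigned".
      admin.createTable(desc, splitKeys);
    }
  }
}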
2023-07-21 18:14:49,969 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=6dc84ab56db4baa558feb6e08391f81a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:49,969 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963289969"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963289969"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963289969"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963289969"}]},"ts":"1689963289969"} 2023-07-21 18:14:49,972 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-21 18:14:49,972 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; OpenRegionProcedure 6dc84ab56db4baa558feb6e08391f81a, server=jenkins-hbase4.apache.org,44049,1689963263942 in 222 msec 2023-07-21 18:14:49,973 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=132 2023-07-21 18:14:49,973 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, ASSIGN in 384 msec 2023-07-21 18:14:49,974 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:49,974 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963289974"}]},"ts":"1689963289974"} 2023-07-21 18:14:49,975 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 18:14:49,976 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-21 18:14:49,976 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-21 18:14:49,977 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:49,979 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 471 msec 2023-07-21 18:14:49,979 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-21 18:14:50,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 18:14:50,113 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-21 18:14:50,113 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(3430): Waiting until all regions of table 
Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-21 18:14:50,113 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:50,117 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-21 18:14:50,117 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:50,117 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 18:14:50,118 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:50,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 18:14:50,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:50,125 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 18:14:50,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 18:14:50,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 18:14:50,128 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963290128"}]},"ts":"1689963290128"} 2023-07-21 18:14:50,130 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 18:14:50,131 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 18:14:50,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, UNASSIGN}] 2023-07-21 18:14:50,133 
INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, UNASSIGN 2023-07-21 18:14:50,133 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, UNASSIGN 2023-07-21 18:14:50,133 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, UNASSIGN 2023-07-21 18:14:50,133 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, UNASSIGN 2023-07-21 18:14:50,134 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, UNASSIGN 2023-07-21 18:14:50,134 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=b86fd58c5a419a5cb8d1f07350aa7f5e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:50,134 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=b64ec615858fc226e895f7ac82106fc1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:50,134 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290134"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963290134"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963290134"}]},"ts":"1689963290134"} 2023-07-21 18:14:50,134 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963290134"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963290134"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963290134"}]},"ts":"1689963290134"} 2023-07-21 18:14:50,135 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure b86fd58c5a419a5cb8d1f07350aa7f5e, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:50,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=148, state=RUNNABLE; CloseRegionProcedure b64ec615858fc226e895f7ac82106fc1, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:50,140 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=6dc84ab56db4baa558feb6e08391f81a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:50,140 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta 
row=0474ff2b8ef881b5f0eea076f1993b72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:50,141 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963290140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963290140"}]},"ts":"1689963290140"} 2023-07-21 18:14:50,140 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f279e2defcb6e80e7325c06238ef6f71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:50,141 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963290140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963290140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963290140"}]},"ts":"1689963290140"} 2023-07-21 18:14:50,141 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963290140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963290140"}]},"ts":"1689963290140"} 2023-07-21 18:14:50,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure 6dc84ab56db4baa558feb6e08391f81a, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:50,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=144, state=RUNNABLE; CloseRegionProcedure 0474ff2b8ef881b5f0eea076f1993b72, server=jenkins-hbase4.apache.org,46437,1689963263715}] 2023-07-21 18:14:50,143 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=145, state=RUNNABLE; CloseRegionProcedure f279e2defcb6e80e7325c06238ef6f71, server=jenkins-hbase4.apache.org,44049,1689963263942}] 2023-07-21 18:14:50,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 18:14:50,292 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:50,292 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:50,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b86fd58c5a419a5cb8d1f07350aa7f5e, disabling compactions & flushes 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b64ec615858fc226e895f7ac82106fc1, disabling compactions & flushes 2023-07-21 18:14:50,294 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 
2023-07-21 18:14:50,294 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. after waiting 0 ms 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. after waiting 0 ms 2023-07-21 18:14:50,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:50,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:50,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:50,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e. 2023-07-21 18:14:50,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b86fd58c5a419a5cb8d1f07350aa7f5e: 2023-07-21 18:14:50,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1. 
2023-07-21 18:14:50,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b64ec615858fc226e895f7ac82106fc1: 2023-07-21 18:14:50,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:50,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:50,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0474ff2b8ef881b5f0eea076f1993b72, disabling compactions & flushes 2023-07-21 18:14:50,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:50,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:50,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. after waiting 0 ms 2023-07-21 18:14:50,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 2023-07-21 18:14:50,302 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=b86fd58c5a419a5cb8d1f07350aa7f5e, regionState=CLOSED 2023-07-21 18:14:50,302 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290302"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963290302"}]},"ts":"1689963290302"} 2023-07-21 18:14:50,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:50,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:50,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6dc84ab56db4baa558feb6e08391f81a, disabling compactions & flushes 2023-07-21 18:14:50,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:50,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:50,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. after waiting 0 ms 2023-07-21 18:14:50,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 
2023-07-21 18:14:50,303 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=b64ec615858fc226e895f7ac82106fc1, regionState=CLOSED 2023-07-21 18:14:50,304 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963290303"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963290303"}]},"ts":"1689963290303"} 2023-07-21 18:14:50,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-21 18:14:50,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure b86fd58c5a419a5cb8d1f07350aa7f5e, server=jenkins-hbase4.apache.org,46437,1689963263715 in 169 msec 2023-07-21 18:14:50,308 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=148 2023-07-21 18:14:50,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b86fd58c5a419a5cb8d1f07350aa7f5e, UNASSIGN in 174 msec 2023-07-21 18:14:50,308 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=148, state=SUCCESS; CloseRegionProcedure b64ec615858fc226e895f7ac82106fc1, server=jenkins-hbase4.apache.org,44049,1689963263942 in 170 msec 2023-07-21 18:14:50,309 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b64ec615858fc226e895f7ac82106fc1, UNASSIGN in 176 msec 2023-07-21 18:14:50,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:50,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:50,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a. 2023-07-21 18:14:50,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6dc84ab56db4baa558feb6e08391f81a: 2023-07-21 18:14:50,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72. 
2023-07-21 18:14:50,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0474ff2b8ef881b5f0eea076f1993b72: 2023-07-21 18:14:50,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:50,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:50,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f279e2defcb6e80e7325c06238ef6f71, disabling compactions & flushes 2023-07-21 18:14:50,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:50,315 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=6dc84ab56db4baa558feb6e08391f81a, regionState=CLOSED 2023-07-21 18:14:50,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:50,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. after waiting 0 ms 2023-07-21 18:14:50,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:50,315 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290315"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963290315"}]},"ts":"1689963290315"} 2023-07-21 18:14:50,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:50,316 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=0474ff2b8ef881b5f0eea076f1993b72, regionState=CLOSED 2023-07-21 18:14:50,316 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689963290316"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963290316"}]},"ts":"1689963290316"} 2023-07-21 18:14:50,319 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-21 18:14:50,319 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; CloseRegionProcedure 6dc84ab56db4baa558feb6e08391f81a, server=jenkins-hbase4.apache.org,44049,1689963263942 in 175 msec 2023-07-21 18:14:50,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:50,319 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=144 2023-07-21 18:14:50,319 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=144, state=SUCCESS; CloseRegionProcedure 0474ff2b8ef881b5f0eea076f1993b72, server=jenkins-hbase4.apache.org,46437,1689963263715 in 175 msec 2023-07-21 18:14:50,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71. 2023-07-21 18:14:50,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f279e2defcb6e80e7325c06238ef6f71: 2023-07-21 18:14:50,320 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc84ab56db4baa558feb6e08391f81a, UNASSIGN in 187 msec 2023-07-21 18:14:50,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0474ff2b8ef881b5f0eea076f1993b72, UNASSIGN in 187 msec 2023-07-21 18:14:50,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:50,321 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f279e2defcb6e80e7325c06238ef6f71, regionState=CLOSED 2023-07-21 18:14:50,321 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689963290321"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963290321"}]},"ts":"1689963290321"} 2023-07-21 18:14:50,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=145 2023-07-21 18:14:50,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=145, state=SUCCESS; CloseRegionProcedure f279e2defcb6e80e7325c06238ef6f71, server=jenkins-hbase4.apache.org,44049,1689963263942 in 179 msec 2023-07-21 18:14:50,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=143 2023-07-21 18:14:50,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f279e2defcb6e80e7325c06238ef6f71, UNASSIGN in 191 msec 2023-07-21 18:14:50,325 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963290325"}]},"ts":"1689963290325"} 2023-07-21 18:14:50,326 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 18:14:50,328 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 18:14:50,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 203 msec 2023-07-21 18:14:50,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 18:14:50,430 INFO [Listener at 
localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-21 18:14:50,431 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:50,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 18:14:50,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2143274979, current retry=0 2023-07-21 18:14:50,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_2143274979. 
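The entries above cover the "move a disabled table" step: the table is reassigned to rsgroup Group_testDisabledTableMove_2143274979 and the group membership znodes under /hbase/rsgroup are rewritten, but region movement is skipped ("Skipping move regions because the table Group_testDisabledTableMove is disabled"), so 0 regions are actually transitioned. A minimal sketch of the client side of this call, using the RSGroupAdminClient this test suite drives (the single-argument Connection constructor and the moveTables(Set<TableName>, String) signature are assumptions based on how the suite uses the client; class name is illustrative):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveDisabledTableSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Reassign the (currently disabled) table to the target group. Only the
          // group metadata changes; no regions move, matching "Moving 0 region(s)".
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testDisabledTableMove")),
              "Group_testDisabledTableMove_2143274979");
        }
      }
    }

Because the table is disabled, only the group bookkeeping changes here; its regions would open on the new group's servers only if the table were later re-enabled.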
2023-07-21 18:14:50,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:50,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 18:14:50,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:14:50,449 INFO [Listener at localhost/36435] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 18:14:50,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 18:14:50,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:50,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:53692 deadline: 1689963350449, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 18:14:50,450 DEBUG [Listener at localhost/36435] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
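The second DisableTable request is rejected with TableNotEnabledException because the table is already DISABLED, and HBaseTestingUtility falls back to "already disabled, so just deleting it." A caller that cannot be sure of the current state can guard the call or treat the exception as benign; a sketch against the public Admin API (helper and class names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    final class DisableIfEnabledSketch {
      // Disable the table only if it is not already disabled, and tolerate the
      // race where it becomes disabled between the check and the call.
      static void disableIfEnabled(Admin admin, TableName table) throws IOException {
        if (admin.isTableDisabled(table)) {
          return;
        }
        try {
          admin.disableTable(table); // blocks until DisableTableProcedure finishes
        } catch (TableNotEnabledException e) {
          // The master rejects a disable of an already-DISABLED table with this
          // exception, as in the log entry above; safe to ignore for this purpose.
        }
      }
    }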
2023-07-21 18:14:50,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-21 18:14:50,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,454 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_2143274979' 2023-07-21 18:14:50,454 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:50,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 18:14:50,462 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:50,462 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:50,462 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:50,462 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:50,462 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:50,465 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/f, FileablePath, 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/recovered.edits] 2023-07-21 18:14:50,465 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/recovered.edits] 2023-07-21 18:14:50,465 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/recovered.edits] 2023-07-21 18:14:50,466 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/recovered.edits] 2023-07-21 18:14:50,466 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/f, FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/recovered.edits] 2023-07-21 18:14:50,475 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72/recovered.edits/4.seqid 2023-07-21 18:14:50,475 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1/recovered.edits/4.seqid 2023-07-21 18:14:50,477 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a/recovered.edits/4.seqid 2023-07-21 18:14:50,477 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/0474ff2b8ef881b5f0eea076f1993b72 2023-07-21 18:14:50,477 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b64ec615858fc226e895f7ac82106fc1 2023-07-21 18:14:50,477 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e/recovered.edits/4.seqid 2023-07-21 18:14:50,478 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/recovered.edits/4.seqid to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/archive/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71/recovered.edits/4.seqid 2023-07-21 18:14:50,478 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/6dc84ab56db4baa558feb6e08391f81a 2023-07-21 18:14:50,478 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/b86fd58c5a419a5cb8d1f07350aa7f5e 2023-07-21 18:14:50,478 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/.tmp/data/default/Group_testDisabledTableMove/f279e2defcb6e80e7325c06238ef6f71 2023-07-21 18:14:50,478 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 18:14:50,481 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,484 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 18:14:50,490 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 18:14:50,491 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,491 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
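DELETE_TABLE_CLEAR_FS_LAYOUT does not destroy region data outright: HFileArchiver moves each region directory's contents (here only the recovered.edits/4.seqid markers, since the archiver lists no store files) from .tmp/data/default/Group_testDisabledTableMove/<region> to archive/data/default/Group_testDisabledTableMove/<region> before deleting the source, and only then are the five vestigial catalog rows cleaned up. A sketch of inspecting that archive location with the HDFS client API (the path assembly is illustrative; the root directory comes from hbase.rootdir):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class ListArchivedTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Path rootDir = new Path(conf.get(HConstants.HBASE_DIR)); // hbase.rootdir
        // Archived data mirrors the live layout: archive/data/<namespace>/<table>/<region>/...
        Path archivedTable = new Path(rootDir, "archive/data/default/Group_testDisabledTableMove");
        FileSystem fs = rootDir.getFileSystem(conf);
        if (fs.exists(archivedTable)) {
          for (FileStatus region : fs.listStatus(archivedTable)) {
            System.out.println(region.getPath()); // one entry per archived region directory
          }
        }
      }
    }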
2023-07-21 18:14:50,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963290492"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963290492"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963290492"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963290492"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963290492"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,494 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 18:14:50,494 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0474ff2b8ef881b5f0eea076f1993b72, NAME => 'Group_testDisabledTableMove,,1689963289506.0474ff2b8ef881b5f0eea076f1993b72.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f279e2defcb6e80e7325c06238ef6f71, NAME => 'Group_testDisabledTableMove,aaaaa,1689963289506.f279e2defcb6e80e7325c06238ef6f71.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 6dc84ab56db4baa558feb6e08391f81a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689963289506.6dc84ab56db4baa558feb6e08391f81a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b86fd58c5a419a5cb8d1f07350aa7f5e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689963289506.b86fd58c5a419a5cb8d1f07350aa7f5e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b64ec615858fc226e895f7ac82106fc1, NAME => 'Group_testDisabledTableMove,zzzzz,1689963289506.b64ec615858fc226e895f7ac82106fc1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 18:14:50,494 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-21 18:14:50,494 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963290494"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:50,496 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 18:14:50,498 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 18:14:50,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 47 msec 2023-07-21 18:14:50,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 18:14:50,561 INFO [Listener at localhost/36435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-21 18:14:50,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:50,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
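With the regions archived and the catalog rows removed, DeleteTableProcedure pid=155 reaches SUCCESS and the client-side TableFuture reports the DELETE operation as completed. The delete step the test drives corresponds roughly to the following Admin calls (a sketch; the synchronous Admin method already waits on the master procedure whose progress is polled above via "Checking to see if procedure is done"):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    final class DropDisabledTableSketch {
      // deleteTable requires the table to be disabled already; the synchronous
      // call returns only after DeleteTableProcedure reaches SUCCESS.
      static void dropDisabledTable(Connection conn, TableName table) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.deleteTable(table);
          if (admin.tableExists(table)) {
            throw new IllegalStateException(table + " still present after delete");
          }
        }
      }
    }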
2023-07-21 18:14:50,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:50,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419] to rsgroup default 2023-07-21 18:14:50,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:14:50,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2143274979, current retry=0 2023-07-21 18:14:50,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41863,1689963267427, jenkins-hbase4.apache.org,43419,1689963263425] are moved back to Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_2143274979 => default 2023-07-21 18:14:50,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:50,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_2143274979 2023-07-21 18:14:50,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:14:50,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:50,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:50,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
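Teardown then hands the two servers that had been carved out for the test back to the default group and removes the now-empty group, dropping the ZK GroupInfo count from 6 to 5. Roughly equivalent client calls (a sketch; the Address values are the host:port pairs from the log, and the RSGroupAdminClient constructor taking an open Connection is assumed from how this suite uses it):

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RemoveGroupSketch {
      static void removeGroup(Connection conn, String group) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Hand the group's servers back to "default" first, mirroring the
        // teardown order in the log, then drop the (now empty) group.
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41863));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43419));
        rsGroupAdmin.moveServers(servers, "default");
        rsGroupAdmin.removeRSGroup(group); // e.g. "Group_testDisabledTableMove_2143274979"
      }
    }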
2023-07-21 18:14:50,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:50,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:50,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:50,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:50,603 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:50,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:50,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:50,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:50,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:50,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:50,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964490614, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:50,615 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:14:50,616 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:50,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,618 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:50,637 INFO [Listener at localhost/36435] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=523 (was 521) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2138724152_17 at /127.0.0.1:51358 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x54c19f7b-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x576859ba-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147856072_17 at /127.0.0.1:41888 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=823 (was 799) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=561 (was 561), ProcessCount=174 (was 174), AvailableMemoryMB=7430 (was 7440) 2023-07-21 18:14:50,637 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-21 18:14:50,655 INFO [Listener at localhost/36435] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=523, OpenFileDescriptor=823, MaxFileDescriptor=60000, SystemLoadAverage=561, ProcessCount=174, AvailableMemoryMB=7431 2023-07-21 18:14:50,655 WARN [Listener at localhost/36435] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-21 18:14:50,655 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 18:14:50,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:14:50,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:14:50,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:14:50,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:14:50,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:14:50,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:14:50,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:50,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:14:50,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:14:50,675 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:14:50,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:14:50,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
18:14:50,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:14:50,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:14:50,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:14:50,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45593] to rsgroup master 2023-07-21 18:14:50,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:14:50,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53692 deadline: 1689964490687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 2023-07-21 18:14:50,688 WARN [Listener at localhost/36435] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45593 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:14:50,690 INFO [Listener at localhost/36435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:50,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:50,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:50,691 INFO [Listener at localhost/36435] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41863, jenkins-hbase4.apache.org:43419, jenkins-hbase4.apache.org:44049, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:14:50,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:14:50,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45593] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:14:50,693 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 18:14:50,693 INFO [Listener at localhost/36435] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 18:14:50,693 DEBUG [Listener at localhost/36435] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24795960 to 127.0.0.1:64847 2023-07-21 18:14:50,693 DEBUG [Listener at localhost/36435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,694 DEBUG [Listener at localhost/36435] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 18:14:50,694 DEBUG [Listener at localhost/36435] util.JVMClusterUtil(257): Found active master hash=1328407298, stopped=false 2023-07-21 18:14:50,695 DEBUG [Listener at localhost/36435] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:14:50,695 DEBUG [Listener at localhost/36435] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:14:50,695 INFO [Listener at localhost/36435] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:50,698 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:50,698 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:50,698 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:50,698 DEBUG 
[Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:50,698 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:50,698 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:50,699 INFO [Listener at localhost/36435] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 18:14:50,699 DEBUG [Listener at localhost/36435] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08bf68d6 to 127.0.0.1:64847 2023-07-21 18:14:50,699 DEBUG [Listener at localhost/36435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,700 INFO [Listener at localhost/36435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43419,1689963263425' ***** 2023-07-21 18:14:50,700 INFO [Listener at localhost/36435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:50,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:50,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:50,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:50,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:50,700 INFO [Listener at localhost/36435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46437,1689963263715' ***** 2023-07-21 18:14:50,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:50,700 INFO [Listener at localhost/36435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:50,700 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:50,700 INFO [Listener at localhost/36435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44049,1689963263942' ***** 2023-07-21 18:14:50,701 INFO [Listener at localhost/36435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:50,700 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:50,701 INFO [Listener at localhost/36435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41863,1689963267427' ***** 2023-07-21 18:14:50,701 INFO [Listener at 
localhost/36435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:50,701 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:50,701 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:50,708 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,710 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:50,710 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:50,710 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:50,710 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,710 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,708 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,710 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:50,718 INFO [RS:3;jenkins-hbase4:41863] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1e4b632f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:50,718 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1d5b0a74{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:50,718 INFO [RS:2;jenkins-hbase4:44049] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@108a780c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:50,718 INFO [RS:0;jenkins-hbase4:43419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@41ccb499{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:50,722 INFO [RS:1;jenkins-hbase4:46437] server.AbstractConnector(383): Stopped ServerConnector@34693676{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:50,722 INFO [RS:3;jenkins-hbase4:41863] server.AbstractConnector(383): Stopped ServerConnector@b0e8cf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:50,722 INFO [RS:0;jenkins-hbase4:43419] server.AbstractConnector(383): Stopped ServerConnector@2f3b3e4c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:50,722 INFO [RS:2;jenkins-hbase4:44049] server.AbstractConnector(383): Stopped ServerConnector@59373ab8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:50,723 INFO [RS:0;jenkins-hbase4:43419] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:50,723 INFO [RS:3;jenkins-hbase4:41863] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:50,723 INFO [RS:1;jenkins-hbase4:46437] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:50,723 INFO [RS:2;jenkins-hbase4:44049] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:50,725 INFO [RS:0;jenkins-hbase4:43419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1dd1c21b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:50,726 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4eb867fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:50,726 INFO [RS:3;jenkins-hbase4:41863] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e82bdde{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:50,728 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10a468c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:50,728 INFO [RS:0;jenkins-hbase4:43419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1af304e9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:50,727 INFO [RS:2;jenkins-hbase4:44049] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4cd52d74{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:50,729 INFO [RS:3;jenkins-hbase4:41863] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6893d3c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:50,730 INFO [RS:2;jenkins-hbase4:44049] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ca31f38{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:50,732 INFO [RS:1;jenkins-hbase4:46437] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:50,732 INFO [RS:0;jenkins-hbase4:43419] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:50,732 INFO [RS:1;jenkins-hbase4:46437] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:50,732 INFO [RS:0;jenkins-hbase4:43419] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:50,733 INFO [RS:0;jenkins-hbase4:43419] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:14:50,732 INFO [RS:1;jenkins-hbase4:46437] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 18:14:50,733 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:50,733 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(3305): Received CLOSE for 205e1c24c493094d8d96bedf6e852764 2023-07-21 18:14:50,733 DEBUG [RS:0;jenkins-hbase4:43419] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a0fc3fa to 127.0.0.1:64847 2023-07-21 18:14:50,733 DEBUG [RS:0;jenkins-hbase4:43419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,733 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43419,1689963263425; all regions closed. 2023-07-21 18:14:50,733 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(3305): Received CLOSE for 17cd69e9cdda513d9c4530910b66d92e 2023-07-21 18:14:50,733 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(3305): Received CLOSE for 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:50,734 INFO [RS:2;jenkins-hbase4:44049] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:50,734 INFO [RS:2;jenkins-hbase4:44049] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:50,734 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:50,734 INFO [RS:3;jenkins-hbase4:41863] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:50,735 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00574c1a to 127.0.0.1:64847 2023-07-21 18:14:50,735 INFO [RS:3;jenkins-hbase4:41863] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:50,735 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 205e1c24c493094d8d96bedf6e852764, disabling compactions & flushes 2023-07-21 18:14:50,735 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:50,735 INFO [RS:3;jenkins-hbase4:41863] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:14:50,735 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:50,735 DEBUG [RS:3;jenkins-hbase4:41863] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x694ffec5 to 127.0.0.1:64847 2023-07-21 18:14:50,735 DEBUG [RS:3;jenkins-hbase4:41863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,735 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41863,1689963267427; all regions closed. 2023-07-21 18:14:50,735 INFO [RS:2;jenkins-hbase4:44049] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:14:50,735 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 18:14:50,736 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(3305): Received CLOSE for b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:50,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:50,736 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:14:50,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:50,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. after waiting 0 ms 2023-07-21 18:14:50,736 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:50,736 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 18:14:50,736 DEBUG [RS:2;jenkins-hbase4:44049] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7edc3cc7 to 127.0.0.1:64847 2023-07-21 18:14:50,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:50,736 DEBUG [RS:2;jenkins-hbase4:44049] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,736 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 18:14:50,737 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1478): Online Regions={b673e11a35285324ee5d9a3e17b12d76=testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76.} 2023-07-21 18:14:50,737 DEBUG [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1504): Waiting on b673e11a35285324ee5d9a3e17b12d76 2023-07-21 18:14:50,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b673e11a35285324ee5d9a3e17b12d76, disabling compactions & flushes 2023-07-21 18:14:50,738 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 18:14:50,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:50,738 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1478): Online Regions={205e1c24c493094d8d96bedf6e852764=unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764., 17cd69e9cdda513d9c4530910b66d92e=hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e., 2a5ec5469486ef5b01d5318bdbcbddf7=hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 18:14:50,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:50,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 
after waiting 0 ms 2023-07-21 18:14:50,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:50,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:50,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:50,738 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1504): Waiting on 1588230740, 17cd69e9cdda513d9c4530910b66d92e, 205e1c24c493094d8d96bedf6e852764, 2a5ec5469486ef5b01d5318bdbcbddf7 2023-07-21 18:14:50,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:50,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:50,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:50,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-21 18:14:50,750 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/WALs/jenkins-hbase4.apache.org,43419,1689963263425/jenkins-hbase4.apache.org%2C43419%2C1689963263425.meta.1689963266181.meta not finished, retry = 0 2023-07-21 18:14:50,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/unmovedTable/205e1c24c493094d8d96bedf6e852764/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 18:14:50,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:50,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 205e1c24c493094d8d96bedf6e852764: 2023-07-21 18:14:50,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689963285506.205e1c24c493094d8d96bedf6e852764. 2023-07-21 18:14:50,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17cd69e9cdda513d9c4530910b66d92e, disabling compactions & flushes 2023-07-21 18:14:50,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:50,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:50,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 
after waiting 0 ms 2023-07-21 18:14:50,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:50,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 17cd69e9cdda513d9c4530910b66d92e 1/1 column families, dataSize=28.46 KB heapSize=46.80 KB 2023-07-21 18:14:50,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/default/testRename/b673e11a35285324ee5d9a3e17b12d76/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 18:14:50,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:50,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b673e11a35285324ee5d9a3e17b12d76: 2023-07-21 18:14:50,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689963283839.b673e11a35285324ee5d9a3e17b12d76. 2023-07-21 18:14:50,765 DEBUG [RS:3;jenkins-hbase4:41863] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:50,765 INFO [RS:3;jenkins-hbase4:41863] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41863%2C1689963267427:(num 1689963267931) 2023-07-21 18:14:50,765 DEBUG [RS:3;jenkins-hbase4:41863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,766 INFO [RS:3;jenkins-hbase4:41863] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,766 INFO [RS:3;jenkins-hbase4:41863] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:50,766 INFO [RS:3;jenkins-hbase4:41863] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:50,766 INFO [RS:3;jenkins-hbase4:41863] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:50,766 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:50,766 INFO [RS:3;jenkins-hbase4:41863] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 18:14:50,768 INFO [RS:3;jenkins-hbase4:41863] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41863 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,776 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41863,1689963267427 2023-07-21 18:14:50,777 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,777 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41863,1689963267427] 2023-07-21 18:14:50,777 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41863,1689963267427; numProcessing=1 2023-07-21 18:14:50,778 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41863,1689963267427 already deleted, retry=false 2023-07-21 18:14:50,778 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41863,1689963267427 expired; onlineServers=3 2023-07-21 18:14:50,803 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 18:14:50,803 INFO 
[regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 18:14:50,804 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 18:14:50,805 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 18:14:50,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.46 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/.tmp/m/daf609ea83404cb7ad8dbef82ce10a63 2023-07-21 18:14:50,810 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/048b7f383a514304a77659cc5ba1cce0 2023-07-21 18:14:50,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for daf609ea83404cb7ad8dbef82ce10a63 2023-07-21 18:14:50,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/.tmp/m/daf609ea83404cb7ad8dbef82ce10a63 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/m/daf609ea83404cb7ad8dbef82ce10a63 2023-07-21 18:14:50,820 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 048b7f383a514304a77659cc5ba1cce0 2023-07-21 18:14:50,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for daf609ea83404cb7ad8dbef82ce10a63 2023-07-21 18:14:50,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/m/daf609ea83404cb7ad8dbef82ce10a63, entries=28, sequenceid=95, filesize=6.1 K 2023-07-21 18:14:50,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.46 KB/29145, heapSize ~46.78 KB/47904, currentSize=0 B/0 for 17cd69e9cdda513d9c4530910b66d92e in 73ms, sequenceid=95, compaction requested=false 2023-07-21 18:14:50,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/rsgroup/17cd69e9cdda513d9c4530910b66d92e/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-21 18:14:50,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:50,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 
2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17cd69e9cdda513d9c4530910b66d92e: 2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689963266552.17cd69e9cdda513d9c4530910b66d92e. 2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2a5ec5469486ef5b01d5318bdbcbddf7, disabling compactions & flushes 2023-07-21 18:14:50,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. after waiting 0 ms 2023-07-21 18:14:50,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:50,854 DEBUG [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:50,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/rep_barrier/aaaf3097f4ab4e5292a371507615b826 2023-07-21 18:14:50,854 INFO [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43419%2C1689963263425.meta:.meta(num 1689963266181) 2023-07-21 18:14:50,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/namespace/2a5ec5469486ef5b01d5318bdbcbddf7/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-21 18:14:50,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 2023-07-21 18:14:50,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2a5ec5469486ef5b01d5318bdbcbddf7: 2023-07-21 18:14:50,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689963266456.2a5ec5469486ef5b01d5318bdbcbddf7. 
2023-07-21 18:14:50,868 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aaaf3097f4ab4e5292a371507615b826 2023-07-21 18:14:50,876 DEBUG [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:50,876 INFO [RS:0;jenkins-hbase4:43419] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43419%2C1689963263425:(num 1689963266059) 2023-07-21 18:14:50,876 DEBUG [RS:0;jenkins-hbase4:43419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,876 INFO [RS:0;jenkins-hbase4:43419] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,878 INFO [RS:0;jenkins-hbase4:43419] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:50,879 INFO [RS:0;jenkins-hbase4:43419] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:50,879 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:50,879 INFO [RS:0;jenkins-hbase4:43419] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:50,879 INFO [RS:0;jenkins-hbase4:43419] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:14:50,880 INFO [RS:0;jenkins-hbase4:43419] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43419 2023-07-21 18:14:50,885 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:50,885 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:50,885 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:50,885 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43419,1689963263425 2023-07-21 18:14:50,887 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43419,1689963263425] 2023-07-21 18:14:50,887 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43419,1689963263425; numProcessing=2 2023-07-21 18:14:50,889 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43419,1689963263425 already deleted, retry=false 2023-07-21 18:14:50,889 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,43419,1689963263425 expired; onlineServers=2 2023-07-21 18:14:50,899 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:50,899 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:41863-0x10189176e19000b, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:50,907 INFO [RS:3;jenkins-hbase4:41863] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41863,1689963267427; zookeeper connection closed. 2023-07-21 18:14:50,911 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3 2023-07-21 18:14:50,911 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/8e9374f2170d4c169a234cd7698892da 2023-07-21 18:14:50,917 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e9374f2170d4c169a234cd7698892da 2023-07-21 18:14:50,918 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/info/048b7f383a514304a77659cc5ba1cce0 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/048b7f383a514304a77659cc5ba1cce0 2023-07-21 18:14:50,925 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 048b7f383a514304a77659cc5ba1cce0 2023-07-21 18:14:50,925 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/info/048b7f383a514304a77659cc5ba1cce0, entries=62, sequenceid=212, filesize=11.9 K 2023-07-21 18:14:50,926 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/rep_barrier/aaaf3097f4ab4e5292a371507615b826 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier/aaaf3097f4ab4e5292a371507615b826 2023-07-21 18:14:50,933 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aaaf3097f4ab4e5292a371507615b826 2023-07-21 18:14:50,934 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/rep_barrier/aaaf3097f4ab4e5292a371507615b826, entries=8, sequenceid=212, filesize=5.8 K 2023-07-21 18:14:50,935 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/.tmp/table/8e9374f2170d4c169a234cd7698892da as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/8e9374f2170d4c169a234cd7698892da 2023-07-21 18:14:50,937 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44049,1689963263942; all regions closed. 2023-07-21 18:14:50,939 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 18:14:50,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e9374f2170d4c169a234cd7698892da 2023-07-21 18:14:50,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/table/8e9374f2170d4c169a234cd7698892da, entries=16, sequenceid=212, filesize=6.0 K 2023-07-21 18:14:50,967 DEBUG [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:50,967 INFO [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44049%2C1689963263942.meta:.meta(num 1689963268782) 2023-07-21 18:14:50,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 233ms, sequenceid=212, compaction requested=true 2023-07-21 18:14:50,972 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 18:14:50,994 DEBUG [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:50,995 INFO [RS:2;jenkins-hbase4:44049] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44049%2C1689963263942:(num 1689963266059) 2023-07-21 18:14:50,996 DEBUG [RS:2;jenkins-hbase4:44049] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:50,996 INFO [RS:2;jenkins-hbase4:44049] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:50,997 INFO [RS:2;jenkins-hbase4:44049] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:50,997 INFO [RS:2;jenkins-hbase4:44049] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:50,997 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:50,997 INFO [RS:2;jenkins-hbase4:44049] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:50,997 INFO [RS:2;jenkins-hbase4:44049] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 18:14:50,998 INFO [RS:2;jenkins-hbase4:44049] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44049 2023-07-21 18:14:51,000 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:51,000 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:51,000 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44049,1689963263942 2023-07-21 18:14:51,002 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44049,1689963263942] 2023-07-21 18:14:51,002 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44049,1689963263942; numProcessing=3 2023-07-21 18:14:51,003 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44049,1689963263942 already deleted, retry=false 2023-07-21 18:14:51,003 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44049,1689963263942 expired; onlineServers=1 2023-07-21 18:14:51,007 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/data/hbase/meta/1588230740/recovered.edits/215.seqid, newMaxSeqId=215, maxSeqId=100 2023-07-21 18:14:51,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:51,008 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:51,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:51,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:51,139 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46437,1689963263715; all regions closed. 
2023-07-21 18:14:51,152 DEBUG [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:51,152 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46437%2C1689963263715.meta:.meta(num 1689963275362) 2023-07-21 18:14:51,167 DEBUG [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/oldWALs 2023-07-21 18:14:51,167 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46437%2C1689963263715:(num 1689963266059) 2023-07-21 18:14:51,167 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:51,167 INFO [RS:1;jenkins-hbase4:46437] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:51,167 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:51,167 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:51,168 INFO [RS:1;jenkins-hbase4:46437] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46437 2023-07-21 18:14:51,171 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46437,1689963263715 2023-07-21 18:14:51,171 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:51,172 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46437,1689963263715] 2023-07-21 18:14:51,173 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46437,1689963263715; numProcessing=4 2023-07-21 18:14:51,174 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46437,1689963263715 already deleted, retry=false 2023-07-21 18:14:51,174 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46437,1689963263715 expired; onlineServers=0 2023-07-21 18:14:51,174 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45593,1689963261589' ***** 2023-07-21 18:14:51,174 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 18:14:51,175 DEBUG [M:0;jenkins-hbase4:45593] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e10e939, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:51,175 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:51,179 INFO 
[M:0;jenkins-hbase4:45593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@13466a5d{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 18:14:51,179 INFO [M:0;jenkins-hbase4:45593] server.AbstractConnector(383): Stopped ServerConnector@61a45427{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:51,179 INFO [M:0;jenkins-hbase4:45593] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:51,180 INFO [M:0;jenkins-hbase4:45593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ed7d79c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:51,180 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:51,180 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:51,181 INFO [M:0;jenkins-hbase4:45593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ae16b10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:51,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:51,181 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45593,1689963261589 2023-07-21 18:14:51,181 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45593,1689963261589; all regions closed. 2023-07-21 18:14:51,181 DEBUG [M:0;jenkins-hbase4:45593] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:51,181 INFO [M:0;jenkins-hbase4:45593] master.HMaster(1491): Stopping master jetty server 2023-07-21 18:14:51,182 INFO [M:0;jenkins-hbase4:45593] server.AbstractConnector(383): Stopped ServerConnector@1bef6a3b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:51,183 DEBUG [M:0;jenkins-hbase4:45593] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 18:14:51,183 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-21 18:14:51,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963265455] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963265455,5,FailOnTimeoutGroup] 2023-07-21 18:14:51,183 DEBUG [M:0;jenkins-hbase4:45593] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 18:14:51,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963265453] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963265453,5,FailOnTimeoutGroup] 2023-07-21 18:14:51,184 INFO [M:0;jenkins-hbase4:45593] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 18:14:51,184 INFO [M:0;jenkins-hbase4:45593] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 18:14:51,184 INFO [M:0;jenkins-hbase4:45593] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-21 18:14:51,184 DEBUG [M:0;jenkins-hbase4:45593] master.HMaster(1512): Stopping service threads 2023-07-21 18:14:51,184 INFO [M:0;jenkins-hbase4:45593] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 18:14:51,184 ERROR [M:0;jenkins-hbase4:45593] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 18:14:51,185 INFO [M:0;jenkins-hbase4:45593] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 18:14:51,186 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 18:14:51,186 DEBUG [M:0;jenkins-hbase4:45593] zookeeper.ZKUtil(398): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 18:14:51,186 WARN [M:0;jenkins-hbase4:45593] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 18:14:51,186 INFO [M:0;jenkins-hbase4:45593] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 18:14:51,186 INFO [M:0;jenkins-hbase4:45593] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 18:14:51,186 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 18:14:51,186 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:51,186 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 18:14:51,186 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 18:14:51,187 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:51,187 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=518.82 KB heapSize=620.89 KB 2023-07-21 18:14:51,212 INFO [M:0;jenkins-hbase4:45593] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=518.82 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d68375ba85b04fdf91837b26572fe077 2023-07-21 18:14:51,221 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d68375ba85b04fdf91837b26572fe077 as hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d68375ba85b04fdf91837b26572fe077 2023-07-21 18:14:51,228 INFO [M:0;jenkins-hbase4:45593] regionserver.HStore(1080): Added hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d68375ba85b04fdf91837b26572fe077, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-21 18:14:51,229 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegion(2948): Finished flush of dataSize ~518.82 KB/531272, heapSize ~620.88 KB/635776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 42ms, sequenceid=1152, compaction requested=false 2023-07-21 18:14:51,231 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:51,231 DEBUG [M:0;jenkins-hbase4:45593] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:51,235 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:51,235 INFO [M:0;jenkins-hbase4:45593] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 18:14:51,235 INFO [M:0;jenkins-hbase4:45593] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45593 2023-07-21 18:14:51,237 DEBUG [M:0;jenkins-hbase4:45593] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45593,1689963261589 already deleted, retry=false 2023-07-21 18:14:51,501 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,501 INFO [M:0;jenkins-hbase4:45593] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45593,1689963261589; zookeeper connection closed. 
2023-07-21 18:14:51,501 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): master:45593-0x10189176e190000, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,601 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,601 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46437,1689963263715; zookeeper connection closed. 2023-07-21 18:14:51,601 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x10189176e190002, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,602 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1ad18bb6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1ad18bb6 2023-07-21 18:14:51,701 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,701 INFO [RS:2;jenkins-hbase4:44049] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44049,1689963263942; zookeeper connection closed. 2023-07-21 18:14:51,701 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:44049-0x10189176e190003, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,702 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@8abdc37] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@8abdc37 2023-07-21 18:14:51,776 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:51,777 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:14:51,777 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:14:51,802 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,802 INFO [RS:0;jenkins-hbase4:43419] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43419,1689963263425; zookeeper connection closed. 
2023-07-21 18:14:51,802 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): regionserver:43419-0x10189176e190001, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:51,802 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5d0507bc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5d0507bc 2023-07-21 18:14:51,803 INFO [Listener at localhost/36435] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 18:14:51,803 WARN [Listener at localhost/36435] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 18:14:51,808 INFO [Listener at localhost/36435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:51,911 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:51,912 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1274896498-172.31.14.131-1689963257910 (Datanode Uuid 5df3cda6-1f99-4214-8482-8fc8dc8e8351) service to localhost/127.0.0.1:37139 2023-07-21 18:14:51,913 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data5/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:51,913 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data6/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:51,915 WARN [Listener at localhost/36435] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 18:14:51,917 INFO [Listener at localhost/36435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:52,020 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:52,020 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1274896498-172.31.14.131-1689963257910 (Datanode Uuid e85904a5-7e61-4aaa-8f5c-ce8327889bbf) service to localhost/127.0.0.1:37139 2023-07-21 18:14:52,020 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data3/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:52,021 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data4/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:52,022 WARN [Listener at localhost/36435] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 18:14:52,025 INFO [Listener at localhost/36435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:52,130 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:52,130 WARN [BP-1274896498-172.31.14.131-1689963257910 heartbeating to localhost/127.0.0.1:37139] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1274896498-172.31.14.131-1689963257910 (Datanode Uuid 9db80221-e53d-40d5-bdb6-5e9a8daaef4e) service to localhost/127.0.0.1:37139 2023-07-21 18:14:52,131 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data1/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:52,132 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/cluster_3aa84d3a-0a47-1c83-a7d0-4c79af33c847/dfs/data/data2/current/BP-1274896498-172.31.14.131-1689963257910] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:52,167 INFO [Listener at localhost/36435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:52,288 INFO [Listener at localhost/36435] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 18:14:52,342 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.log.dir so I do NOT create it in target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/45fa3700-fc6c-f39c-17df-27dee991fd71/hadoop.tmp.dir so I do NOT create it in target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3, deleteOnExit=true 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/test.cache.data in system properties and HBase conf 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir in system properties and HBase conf 2023-07-21 18:14:52,343 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 18:14:52,344 DEBUG [Listener at localhost/36435] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 18:14:52,344 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/nfs.dump.dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 18:14:52,345 INFO [Listener at localhost/36435] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 18:14:52,349 WARN [Listener at localhost/36435] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 18:14:52,350 WARN [Listener at localhost/36435] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 18:14:52,385 DEBUG [Listener at localhost/36435-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10189176e19000a, quorum=127.0.0.1:64847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 18:14:52,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10189176e19000a, quorum=127.0.0.1:64847, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 18:14:52,394 WARN [Listener at localhost/36435] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 18:14:52,442 WARN [Listener at localhost/36435] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:52,444 INFO [Listener at localhost/36435] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:52,448 INFO [Listener at localhost/36435] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/Jetty_localhost_35311_hdfs____8tbw63/webapp 2023-07-21 18:14:52,546 INFO [Listener at localhost/36435] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35311 2023-07-21 18:14:52,553 WARN [Listener at localhost/36435] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 18:14:52,554 WARN [Listener at localhost/36435] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 18:14:52,598 WARN [Listener at localhost/42925] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:52,612 
WARN [Listener at localhost/42925] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:52,614 WARN [Listener at localhost/42925] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:52,616 INFO [Listener at localhost/42925] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:52,622 INFO [Listener at localhost/42925] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/Jetty_localhost_41487_datanode____3kj6nx/webapp 2023-07-21 18:14:52,757 INFO [Listener at localhost/42925] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41487 2023-07-21 18:14:52,766 WARN [Listener at localhost/46669] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:52,786 WARN [Listener at localhost/46669] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:52,788 WARN [Listener at localhost/46669] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:52,789 INFO [Listener at localhost/46669] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:52,793 INFO [Listener at localhost/46669] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/Jetty_localhost_35861_datanode____ly6th4/webapp 2023-07-21 18:14:52,883 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeac54b40b0126f88: Processing first storage report for DS-a864ba33-5f7c-4637-a6ce-37fc55747a23 from datanode 7af9a7ae-a25e-425b-a226-36333ffe6a14 2023-07-21 18:14:52,883 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeac54b40b0126f88: from storage DS-a864ba33-5f7c-4637-a6ce-37fc55747a23 node DatanodeRegistration(127.0.0.1:34745, datanodeUuid=7af9a7ae-a25e-425b-a226-36333ffe6a14, infoPort=38549, infoSecurePort=0, ipcPort=46669, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 18:14:52,883 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeac54b40b0126f88: Processing first storage report for DS-1e08c44b-5f82-4c56-b253-469c0f958060 from datanode 7af9a7ae-a25e-425b-a226-36333ffe6a14 2023-07-21 18:14:52,883 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeac54b40b0126f88: from storage DS-1e08c44b-5f82-4c56-b253-469c0f958060 node DatanodeRegistration(127.0.0.1:34745, datanodeUuid=7af9a7ae-a25e-425b-a226-36333ffe6a14, infoPort=38549, infoSecurePort=0, ipcPort=46669, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:52,915 INFO [Listener at localhost/46669] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35861 2023-07-21 18:14:52,923 WARN [Listener at localhost/37039] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:52,947 WARN [Listener at localhost/37039] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:52,950 WARN [Listener at localhost/37039] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:52,952 INFO [Listener at localhost/37039] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:52,963 INFO [Listener at localhost/37039] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/Jetty_localhost_39829_datanode____.h74wu8/webapp 2023-07-21 18:14:53,073 INFO [Listener at localhost/37039] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39829 2023-07-21 18:14:53,082 WARN [Listener at localhost/40193] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:53,152 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x57013b557bca571a: Processing first storage report for DS-e9f00bc9-1322-455d-b177-aca5c3d88506 from datanode 595e82c8-4a9d-4e82-9c27-1545721fa011 2023-07-21 18:14:53,152 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x57013b557bca571a: from storage DS-e9f00bc9-1322-455d-b177-aca5c3d88506 node DatanodeRegistration(127.0.0.1:36007, datanodeUuid=595e82c8-4a9d-4e82-9c27-1545721fa011, infoPort=43595, infoSecurePort=0, ipcPort=37039, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:53,152 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x57013b557bca571a: Processing first storage report for DS-5836deee-fe7b-4b47-a218-c258ad5bc477 from datanode 595e82c8-4a9d-4e82-9c27-1545721fa011 2023-07-21 18:14:53,152 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x57013b557bca571a: from storage DS-5836deee-fe7b-4b47-a218-c258ad5bc477 node DatanodeRegistration(127.0.0.1:36007, datanodeUuid=595e82c8-4a9d-4e82-9c27-1545721fa011, infoPort=43595, infoSecurePort=0, ipcPort=37039, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:53,423 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x14fd1d2dd03f8387: Processing first storage report for DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9 from datanode 52389ff1-2798-4118-9a33-3a4b143b6d06 2023-07-21 18:14:53,423 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x14fd1d2dd03f8387: from storage DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9 node DatanodeRegistration(127.0.0.1:43487, datanodeUuid=52389ff1-2798-4118-9a33-3a4b143b6d06, infoPort=38287, infoSecurePort=0, ipcPort=40193, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: true, processing 
time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:53,423 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x14fd1d2dd03f8387: Processing first storage report for DS-e9ff6cfd-81e1-41eb-9371-01ebd95f1a35 from datanode 52389ff1-2798-4118-9a33-3a4b143b6d06 2023-07-21 18:14:53,423 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x14fd1d2dd03f8387: from storage DS-e9ff6cfd-81e1-41eb-9371-01ebd95f1a35 node DatanodeRegistration(127.0.0.1:43487, datanodeUuid=52389ff1-2798-4118-9a33-3a4b143b6d06, infoPort=38287, infoSecurePort=0, ipcPort=40193, storageInfo=lv=-57;cid=testClusterID;nsid=744663831;c=1689963292352), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 18:14:53,497 DEBUG [Listener at localhost/40193] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37 2023-07-21 18:14:53,501 INFO [Listener at localhost/40193] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/zookeeper_0, clientPort=51543, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 18:14:53,502 INFO [Listener at localhost/40193] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51543 2023-07-21 18:14:53,503 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,504 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,522 INFO [Listener at localhost/40193] util.FSUtils(471): Created version file at hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613 with version=8 2023-07-21 18:14:53,522 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/hbase-staging 2023-07-21 18:14:53,523 DEBUG [Listener at localhost/40193] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 18:14:53,523 DEBUG [Listener at localhost/40193] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 18:14:53,523 DEBUG [Listener at localhost/40193] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 18:14:53,523 DEBUG [Listener at localhost/40193] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-21 18:14:53,524 INFO [Listener at localhost/40193] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:53,524 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,525 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,525 INFO [Listener at localhost/40193] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:53,525 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,525 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:53,525 INFO [Listener at localhost/40193] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:53,526 INFO [Listener at localhost/40193] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46525 2023-07-21 18:14:53,527 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,528 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,529 INFO [Listener at localhost/40193] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46525 connecting to ZooKeeper ensemble=127.0.0.1:51543 2023-07-21 18:14:53,541 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:465250x0, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:53,542 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46525-0x1018917ee3e0000 connected 2023-07-21 18:14:53,609 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:53,609 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:53,610 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:53,611 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46525 2023-07-21 18:14:53,611 DEBUG [Listener at localhost/40193] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46525 2023-07-21 18:14:53,611 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46525 2023-07-21 18:14:53,612 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46525 2023-07-21 18:14:53,612 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46525 2023-07-21 18:14:53,614 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:53,614 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:53,614 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:53,615 INFO [Listener at localhost/40193] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 18:14:53,615 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:53,615 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:53,615 INFO [Listener at localhost/40193] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 18:14:53,616 INFO [Listener at localhost/40193] http.HttpServer(1146): Jetty bound to port 35355 2023-07-21 18:14:53,616 INFO [Listener at localhost/40193] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:53,619 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,619 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@333c6535{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:53,620 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,620 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fe18136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:53,742 INFO [Listener at localhost/40193] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:53,743 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:53,743 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:53,743 INFO [Listener at localhost/40193] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:14:53,744 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,745 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@59827a1c{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/jetty-0_0_0_0-35355-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7079617342131701670/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 18:14:53,747 INFO [Listener at localhost/40193] server.AbstractConnector(333): Started ServerConnector@1aa13f52{HTTP/1.1, (http/1.1)}{0.0.0.0:35355} 2023-07-21 18:14:53,747 INFO [Listener at localhost/40193] server.Server(415): Started @37880ms 2023-07-21 18:14:53,747 INFO [Listener at localhost/40193] master.HMaster(444): hbase.rootdir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613, hbase.cluster.distributed=false 2023-07-21 18:14:53,766 INFO [Listener at localhost/40193] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:53,766 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,766 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,766 
INFO [Listener at localhost/40193] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:53,767 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:53,767 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:53,767 INFO [Listener at localhost/40193] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:53,809 INFO [Listener at localhost/40193] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44873 2023-07-21 18:14:53,811 INFO [Listener at localhost/40193] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:14:53,818 DEBUG [Listener at localhost/40193] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:14:53,820 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,821 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:53,822 INFO [Listener at localhost/40193] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44873 connecting to ZooKeeper ensemble=127.0.0.1:51543 2023-07-21 18:14:53,832 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:448730x0, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:53,834 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:448730x0, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:53,834 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44873-0x1018917ee3e0001 connected 2023-07-21 18:14:53,836 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:53,836 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:53,838 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44873 2023-07-21 18:14:53,841 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44873 2023-07-21 18:14:53,841 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44873 2023-07-21 18:14:53,846 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44873 2023-07-21 18:14:53,847 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44873 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:53,849 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:53,850 INFO [Listener at localhost/40193] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:14:53,851 INFO [Listener at localhost/40193] http.HttpServer(1146): Jetty bound to port 35875 2023-07-21 18:14:53,851 INFO [Listener at localhost/40193] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:53,853 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,853 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ebe50f9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:53,853 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,853 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37381802{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:53,989 INFO [Listener at localhost/40193] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:53,990 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:53,991 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:53,991 INFO [Listener at localhost/40193] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:14:53,996 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:53,997 INFO 
[Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1def00ae{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/jetty-0_0_0_0-35875-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4793423252742134901/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:53,999 INFO [Listener at localhost/40193] server.AbstractConnector(333): Started ServerConnector@135b9bfe{HTTP/1.1, (http/1.1)}{0.0.0.0:35875} 2023-07-21 18:14:53,999 INFO [Listener at localhost/40193] server.Server(415): Started @38133ms 2023-07-21 18:14:54,017 INFO [Listener at localhost/40193] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:54,017 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,017 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,017 INFO [Listener at localhost/40193] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:54,017 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,018 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:54,018 INFO [Listener at localhost/40193] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:54,019 INFO [Listener at localhost/40193] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34289 2023-07-21 18:14:54,019 INFO [Listener at localhost/40193] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:14:54,020 DEBUG [Listener at localhost/40193] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:14:54,021 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:54,021 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:54,023 INFO [Listener at localhost/40193] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34289 connecting to ZooKeeper ensemble=127.0.0.1:51543 2023-07-21 18:14:54,029 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:342890x0, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
18:14:54,031 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34289-0x1018917ee3e0002 connected 2023-07-21 18:14:54,031 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:54,032 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:54,032 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:54,034 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34289 2023-07-21 18:14:54,035 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34289 2023-07-21 18:14:54,035 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34289 2023-07-21 18:14:54,038 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34289 2023-07-21 18:14:54,039 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34289 2023-07-21 18:14:54,040 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:54,040 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:54,041 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:54,041 INFO [Listener at localhost/40193] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:14:54,041 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:54,041 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:54,041 INFO [Listener at localhost/40193] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 18:14:54,042 INFO [Listener at localhost/40193] http.HttpServer(1146): Jetty bound to port 40881 2023-07-21 18:14:54,042 INFO [Listener at localhost/40193] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:54,045 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,045 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23bc259{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:54,046 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,046 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29771e0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:54,161 INFO [Listener at localhost/40193] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:54,162 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:54,162 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:54,162 INFO [Listener at localhost/40193] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 18:14:54,163 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,163 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@36f809c4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/jetty-0_0_0_0-40881-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3774840925615157849/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:54,165 INFO [Listener at localhost/40193] server.AbstractConnector(333): Started ServerConnector@34ebe5e2{HTTP/1.1, (http/1.1)}{0.0.0.0:40881} 2023-07-21 18:14:54,165 INFO [Listener at localhost/40193] server.Server(415): Started @38299ms 2023-07-21 18:14:54,179 INFO [Listener at localhost/40193] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:54,179 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,180 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,180 INFO [Listener at localhost/40193] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:54,180 INFO 
[Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:54,180 INFO [Listener at localhost/40193] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:54,180 INFO [Listener at localhost/40193] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:54,181 INFO [Listener at localhost/40193] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43427 2023-07-21 18:14:54,181 INFO [Listener at localhost/40193] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:14:54,183 DEBUG [Listener at localhost/40193] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:14:54,183 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:54,184 INFO [Listener at localhost/40193] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:54,185 INFO [Listener at localhost/40193] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43427 connecting to ZooKeeper ensemble=127.0.0.1:51543 2023-07-21 18:14:54,189 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:434270x0, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:54,190 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:434270x0, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:54,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43427-0x1018917ee3e0003 connected 2023-07-21 18:14:54,191 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:54,191 DEBUG [Listener at localhost/40193] zookeeper.ZKUtil(164): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:54,192 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43427 2023-07-21 18:14:54,192 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43427 2023-07-21 18:14:54,192 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43427 2023-07-21 18:14:54,193 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43427 2023-07-21 18:14:54,193 DEBUG [Listener at localhost/40193] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=43427 2023-07-21 18:14:54,195 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:54,195 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:54,195 INFO [Listener at localhost/40193] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:54,196 INFO [Listener at localhost/40193] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:14:54,196 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:54,196 INFO [Listener at localhost/40193] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:54,196 INFO [Listener at localhost/40193] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:14:54,197 INFO [Listener at localhost/40193] http.HttpServer(1146): Jetty bound to port 34737 2023-07-21 18:14:54,197 INFO [Listener at localhost/40193] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:54,203 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,203 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5a7753c6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:54,203 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,204 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@778bcb86{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:54,321 INFO [Listener at localhost/40193] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:54,321 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:54,322 INFO [Listener at localhost/40193] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:54,322 INFO [Listener at localhost/40193] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 18:14:54,323 INFO [Listener at localhost/40193] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:54,324 INFO [Listener at localhost/40193] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7ee0305{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/java.io.tmpdir/jetty-0_0_0_0-34737-hbase-server-2_4_18-SNAPSHOT_jar-_-any-411773289806254435/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:54,325 INFO [Listener at localhost/40193] server.AbstractConnector(333): Started ServerConnector@9edd795{HTTP/1.1, (http/1.1)}{0.0.0.0:34737} 2023-07-21 18:14:54,325 INFO [Listener at localhost/40193] server.Server(415): Started @38459ms 2023-07-21 18:14:54,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:54,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@27dff1cb{HTTP/1.1, (http/1.1)}{0.0.0.0:42677} 2023-07-21 18:14:54,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38472ms 2023-07-21 18:14:54,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,341 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 18:14:54,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,343 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:54,343 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:54,343 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:54,343 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:54,344 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:14:54,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46525,1689963293524 from backup master directory 2023-07-21 18:14:54,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:14:54,350 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,350 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 18:14:54,350 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:14:54,350 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,374 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/hbase.id with ID: f67da582-2842-4d99-a9e7-6b2d86cce74c 2023-07-21 18:14:54,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:54,389 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x642a6a7d to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:54,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17994d72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:54,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:54,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 18:14:54,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:54,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store-tmp 2023-07-21 18:14:54,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 18:14:54,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:54,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
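The 'master:store' descriptor printed above can be reconstructed, approximately, with the public 2.x descriptor builders. MasterRegion assembles the real descriptor internally, so the sketch below is only illustrative and its class name is made up; the family attributes are copied from the log line.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // One 'proc' family: ROW bloom filter, 1 version, 64 KB blocks, block cache on, not in-memory.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)
        .setMaxVersions(1)
        .setInMemory(false)
        .setBlockCacheEnabled(true)
        .setBlocksize(64 * 1024)
        .build();

    TableDescriptor masterStore = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    System.out.println(masterStore);
  }
}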
2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:54,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/WALs/jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46525%2C1689963293524, suffix=, logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/WALs/jenkins-hbase4.apache.org,46525,1689963293524, archiveDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/oldWALs, maxLogs=10 2023-07-21 18:14:54,444 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK] 2023-07-21 18:14:54,447 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK] 2023-07-21 18:14:54,447 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK] 2023-07-21 18:14:54,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/WALs/jenkins-hbase4.apache.org,46525,1689963293524/jenkins-hbase4.apache.org%2C46525%2C1689963293524.1689963294424 2023-07-21 18:14:54,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK], DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK], DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK]] 2023-07-21 18:14:54,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:54,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:54,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,454 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,456 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 18:14:54,456 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 18:14:54,457 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:54,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:14:54,466 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:54,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10483193920, jitterRate=-0.02367648482322693}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:54,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:54,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 18:14:54,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 18:14:54,485 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 18:14:54,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 18:14:54,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 18:14:54,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 18:14:54,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 18:14:54,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 18:14:54,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 18:14:54,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 18:14:54,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 18:14:54,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 18:14:54,496 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 18:14:54,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 18:14:54,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 18:14:54,500 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:54,500 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:54,500 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 18:14:54,500 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:54,500 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46525,1689963293524, sessionid=0x1018917ee3e0000, setting cluster-up flag (Was=false) 2023-07-21 18:14:54,506 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 18:14:54,513 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,517 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:54,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 18:14:54,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:54,524 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.hbase-snapshot/.tmp 2023-07-21 18:14:54,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 18:14:54,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 18:14:54,528 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(951): ClusterId : f67da582-2842-4d99-a9e7-6b2d86cce74c 2023-07-21 18:14:54,530 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:54,530 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 
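The RSGroupAdminService registration and RSGroupAdminEndpoint coprocessor load above come from the rsgroup module being wired into the master. On branch-2 that is normally done with the two properties below; a sketch assuming the test sets them programmatically, with an invented class name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Loads the rsgroup admin endpoint on the master and swaps in the group-aware balancer.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}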
2023-07-21 18:14:54,536 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(951): ClusterId : f67da582-2842-4d99-a9e7-6b2d86cce74c 2023-07-21 18:14:54,531 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:14:54,536 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 18:14:54,537 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:54,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 18:14:54,539 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:54,540 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:54,540 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:54,542 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(951): ClusterId : f67da582-2842-4d99-a9e7-6b2d86cce74c 2023-07-21 18:14:54,543 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:54,543 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:14:54,543 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:54,545 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:54,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:14:54,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
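The StochasticLoadBalancer "Loaded config" lines above echo a handful of tuning settings. The keys below are believed to be the standard ones behind maxSteps, runMaxSteps, stepsPerRegion and maxRunningTime; treat the exact names as an assumption, with the values copied from the log and the class name invented.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Values as reported in the "Loaded config" line (the usual defaults).
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    System.out.println(conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
  }
}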
2023-07-21 18:14:54,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:14:54,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:54,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,555 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:14:54,557 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ReadOnlyZKClient(139): Connect 0x47b9e706 to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:54,557 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ReadOnlyZKClient(139): Connect 0x723d6c07 to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:54,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689963324558 2023-07-21 18:14:54,560 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:14:54,560 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(43): Procedure 
online-snapshot initializing 2023-07-21 18:14:54,561 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 18:14:54,563 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ReadOnlyZKClient(139): Connect 0x317ee0ad to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:54,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 18:14:54,566 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:54,566 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 18:14:54,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
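The "Chore ScheduledChore name=LogsCleaner, period=600000 ..." entries describe chores scheduled on the master's ChoreService. A minimal sketch of that mechanism using the public ScheduledChore and ChoreService classes; the chore body, names and timings here are invented, and the period is given in the constructor's default unit (milliseconds).

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };

    // Cleaners like LogsCleaner/HFileCleaner are ScheduledChores run periodically by a ChoreService.
    ScheduledChore demoCleaner = new ScheduledChore("DemoCleaner", stopper, 600_000) {
      @Override protected void chore() {
        System.out.println("cleaner pass");
      }
    };

    ChoreService service = new ChoreService("demo");
    service.scheduleChore(demoCleaner);
    Thread.sleep(100);
    service.shutdown();
  }
}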
2023-07-21 18:14:54,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 18:14:54,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 18:14:54,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 18:14:54,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 18:14:54,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 18:14:54,572 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:54,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963294572,5,FailOnTimeoutGroup] 2023-07-21 18:14:54,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963294572,5,FailOnTimeoutGroup] 2023-07-21 18:14:54,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 18:14:54,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 18:14:54,580 DEBUG [RS:0;jenkins-hbase4:44873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cfa9c30, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:54,580 DEBUG [RS:1;jenkins-hbase4:34289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@497e1181, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:54,580 DEBUG [RS:2;jenkins-hbase4:43427] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b868bdd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:54,580 DEBUG [RS:1;jenkins-hbase4:34289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8431bba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:54,580 DEBUG [RS:0;jenkins-hbase4:44873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dc7960b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:54,581 DEBUG [RS:2;jenkins-hbase4:43427] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e8412c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:54,591 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34289 2023-07-21 18:14:54,591 INFO [RS:1;jenkins-hbase4:34289] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:54,591 INFO [RS:1;jenkins-hbase4:34289] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:54,591 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 18:14:54,592 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46525,1689963293524 with isa=jenkins-hbase4.apache.org/172.31.14.131:34289, startcode=1689963294016 2023-07-21 18:14:54,592 DEBUG [RS:1;jenkins-hbase4:34289] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:54,593 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44873 2023-07-21 18:14:54,593 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43427 2023-07-21 18:14:54,593 INFO [RS:0;jenkins-hbase4:44873] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:54,593 INFO [RS:0;jenkins-hbase4:44873] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:54,593 INFO [RS:2;jenkins-hbase4:43427] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:14:54,593 INFO [RS:2;jenkins-hbase4:43427] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:14:54,593 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:54,593 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:14:54,593 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46525,1689963293524 with isa=jenkins-hbase4.apache.org/172.31.14.131:44873, startcode=1689963293765 2023-07-21 18:14:54,593 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46525,1689963293524 with isa=jenkins-hbase4.apache.org/172.31.14.131:43427, startcode=1689963294179 2023-07-21 18:14:54,594 DEBUG [RS:0;jenkins-hbase4:44873] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:54,594 DEBUG [RS:2;jenkins-hbase4:43427] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:14:54,594 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59133, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:54,596 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46525] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,596 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 18:14:54,596 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613 2023-07-21 18:14:54,597 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42925 2023-07-21 18:14:54,597 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35355 2023-07-21 18:14:54,598 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:54,598 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ZKUtil(162): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,598 WARN [RS:1;jenkins-hbase4:34289] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:14:54,598 INFO [RS:1;jenkins-hbase4:34289] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:54,598 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,603 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 18:14:54,604 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49799, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:54,604 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34289,1689963294016] 2023-07-21 18:14:54,604 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40573, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:14:54,605 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46525] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 18:14:54,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 18:14:54,605 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46525] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:14:54,606 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 18:14:54,606 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613 2023-07-21 18:14:54,606 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613 2023-07-21 18:14:54,606 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42925 2023-07-21 18:14:54,606 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42925 2023-07-21 18:14:54,606 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35355 2023-07-21 18:14:54,606 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35355 2023-07-21 18:14:54,611 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:54,612 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ZKUtil(162): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,612 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ZKUtil(162): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,612 WARN [RS:2;jenkins-hbase4:43427] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:14:54,612 WARN [RS:0;jenkins-hbase4:44873] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
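The "RegionServer ephemeral node created, adding [...]" lines correspond, at the ZooKeeper level, to each region server creating an ephemeral child under /hbase/rs. A raw-ZooKeeper sketch of that registration follows; the server name is copied from the log, the parent znode is assumed to exist, and HBase itself goes through ZKUtil rather than this code.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class RsEphemeralZnodeSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51543", 90_000, e -> { });

    // An EPHEMERAL znode disappears when the session dies, which is how the master's
    // RegionServerTracker notices a region server going away.
    String path = zk.create(
        "/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016",
        new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.EPHEMERAL);
    System.out.println("registered " + path);

    zk.close();
  }
}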
2023-07-21 18:14:54,612 INFO [RS:2;jenkins-hbase4:43427] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:54,612 INFO [RS:0;jenkins-hbase4:44873] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:54,612 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ZKUtil(162): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,612 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,612 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,612 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44873,1689963293765] 2023-07-21 18:14:54,612 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43427,1689963294179] 2023-07-21 18:14:54,612 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ZKUtil(162): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,619 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ZKUtil(162): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,622 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:54,622 INFO [RS:1;jenkins-hbase4:34289] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:54,624 INFO [RS:1;jenkins-hbase4:34289] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:54,624 INFO [RS:1;jenkins-hbase4:34289] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:54,624 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,624 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:54,626 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ZKUtil(162): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,626 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
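"Instantiating WALProvider of type class ...AsyncFSWALProvider" reflects the WAL provider selection, which hbase.wal.provider controls explicitly (asyncfs is the usual 2.x default anyway). A tiny sketch with an invented class name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" maps to AsyncFSWALProvider, "filesystem" to the classic FSHLog-based provider.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println(conf.get("hbase.wal.provider"));
  }
}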
2023-07-21 18:14:54,626 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,626 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ZKUtil(162): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,627 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ZKUtil(162): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,626 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,627 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ZKUtil(162): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,627 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ZKUtil(162): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,627 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,628 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,628 DEBUG [RS:1;jenkins-hbase4:34289] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,628 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ZKUtil(162): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,628 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:54,629 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 18:14:54,629 INFO [RS:0;jenkins-hbase4:44873] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:54,629 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,629 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,629 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,629 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:14:54,631 INFO [RS:2;jenkins-hbase4:43427] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:14:54,633 INFO [RS:0;jenkins-hbase4:44873] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:54,633 INFO [RS:2;jenkins-hbase4:43427] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:14:54,633 INFO [RS:0;jenkins-hbase4:44873] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:54,633 INFO [RS:2;jenkins-hbase4:43427] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:14:54,633 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,633 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,633 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:54,634 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:14:54,635 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,635 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 18:14:54,635 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:14:54,636 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,636 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:2;jenkins-hbase4:43427] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,637 DEBUG [RS:0;jenkins-hbase4:44873] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:14:54,640 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,640 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,646 INFO [RS:1;jenkins-hbase4:34289] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:54,646 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34289,1689963294016-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,655 INFO [RS:0;jenkins-hbase4:44873] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:54,655 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44873,1689963293765-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,655 INFO [RS:2;jenkins-hbase4:43427] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:14:54,655 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43427,1689963294179-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
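Each server starts a family of single-threaded executors (RS_OPEN_REGION, RS_CLOSE_REGION, and so on, all with corePoolSize=maxPoolSize=1) plus periodic chores such as CompactionChecker at 1000 ms. The following JDK-only analogy sketches that pattern and assumes nothing about HBase's own ExecutorService/ChoreService classes beyond what the log states.

    // Plain java.util.concurrent sketch of the logged pattern: one bounded
    // pool per event type and a scheduler that runs periodic "chores".
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class RsExecutorSketch {
      public static void main(String[] args) {
        // RS_OPEN_REGION analogue: corePoolSize=1, maxPoolSize=1
        ThreadPoolExecutor openRegionPool = new ThreadPoolExecutor(
            1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Chore analogue: CompactionChecker every 1000 ms, as logged
        ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor();
        chores.scheduleAtFixedRate(
            () -> System.out.println("CompactionChecker tick"), 0, 1000, TimeUnit.MILLISECONDS);

        openRegionPool.submit(() -> System.out.println("open-region task"));
      }
    }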
2023-07-21 18:14:54,659 INFO [RS:1;jenkins-hbase4:34289] regionserver.Replication(203): jenkins-hbase4.apache.org,34289,1689963294016 started 2023-07-21 18:14:54,659 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34289,1689963294016, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34289, sessionid=0x1018917ee3e0002 2023-07-21 18:14:54,659 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:54,659 DEBUG [RS:1;jenkins-hbase4:34289] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,659 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34289,1689963294016' 2023-07-21 18:14:54,659 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34289,1689963294016' 2023-07-21 18:14:54,660 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:54,661 DEBUG [RS:1;jenkins-hbase4:34289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:54,661 DEBUG [RS:1;jenkins-hbase4:34289] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:54,661 INFO [RS:1;jenkins-hbase4:34289] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 18:14:54,664 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,665 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ZKUtil(398): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 18:14:54,665 INFO [RS:1;jenkins-hbase4:34289] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 18:14:54,665 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,666 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 18:14:54,675 INFO [RS:0;jenkins-hbase4:44873] regionserver.Replication(203): jenkins-hbase4.apache.org,44873,1689963293765 started 2023-07-21 18:14:54,676 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44873,1689963293765, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44873, sessionid=0x1018917ee3e0001 2023-07-21 18:14:54,676 INFO [RS:2;jenkins-hbase4:43427] regionserver.Replication(203): jenkins-hbase4.apache.org,43427,1689963294179 started 2023-07-21 18:14:54,676 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:54,676 DEBUG [RS:0;jenkins-hbase4:44873] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,676 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43427,1689963294179, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43427, sessionid=0x1018917ee3e0003 2023-07-21 18:14:54,676 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44873,1689963293765' 2023-07-21 18:14:54,676 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:54,676 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:14:54,676 DEBUG [RS:2;jenkins-hbase4:43427] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,676 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43427,1689963294179' 2023-07-21 18:14:54,676 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:14:54,676 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:54,676 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44873,1689963293765' 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:54,677 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:14:54,677 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:14:54,677 
DEBUG [RS:2;jenkins-hbase4:43427] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:54,677 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43427,1689963294179' 2023-07-21 18:14:54,677 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:54,677 DEBUG [RS:2;jenkins-hbase4:43427] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:14:54,677 DEBUG [RS:0;jenkins-hbase4:44873] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:54,677 INFO [RS:0;jenkins-hbase4:44873] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 18:14:54,677 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,678 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ZKUtil(398): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 18:14:54,678 INFO [RS:0;jenkins-hbase4:44873] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 18:14:54,678 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,678 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,678 DEBUG [RS:2;jenkins-hbase4:43427] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:14:54,678 INFO [RS:2;jenkins-hbase4:43427] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 18:14:54,678 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,679 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ZKUtil(398): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 18:14:54,679 INFO [RS:2;jenkins-hbase4:43427] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 18:14:54,679 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:54,679 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
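At this point every region server has joined the flush-table-proc and online-snapshot procedure groups under /hbase in ZooKeeper and started its RPC quota manager. A hedged client-side sketch of the operations those members exist to serve is below; the table name and snapshot name are placeholders, not values from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshotSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("t1");     // placeholder table
          admin.flush(table);                            // served via the flush-table-proc members
          admin.snapshot("t1-snap", table);              // served via the online-snapshot members
        }
      }
    }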
2023-07-21 18:14:54,769 INFO [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34289%2C1689963294016, suffix=, logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,34289,1689963294016, archiveDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs, maxLogs=32 2023-07-21 18:14:54,780 INFO [RS:0;jenkins-hbase4:44873] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44873%2C1689963293765, suffix=, logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,44873,1689963293765, archiveDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs, maxLogs=32 2023-07-21 18:14:54,781 INFO [RS:2;jenkins-hbase4:43427] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43427%2C1689963294179, suffix=, logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,43427,1689963294179, archiveDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs, maxLogs=32 2023-07-21 18:14:54,800 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK] 2023-07-21 18:14:54,801 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK] 2023-07-21 18:14:54,803 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK] 2023-07-21 18:14:54,831 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK] 2023-07-21 18:14:54,831 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK] 2023-07-21 18:14:54,831 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK] 2023-07-21 18:14:54,832 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK] 2023-07-21 18:14:54,832 DEBUG [RS-EventLoopGroup-11-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK] 2023-07-21 18:14:54,833 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK] 2023-07-21 18:14:54,833 INFO [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,34289,1689963294016/jenkins-hbase4.apache.org%2C34289%2C1689963294016.1689963294772 2023-07-21 18:14:54,841 INFO [RS:0;jenkins-hbase4:44873] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,44873,1689963293765/jenkins-hbase4.apache.org%2C44873%2C1689963293765.1689963294788 2023-07-21 18:14:54,842 DEBUG [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK], DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK], DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK]] 2023-07-21 18:14:54,842 INFO [RS:2;jenkins-hbase4:43427] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,43427,1689963294179/jenkins-hbase4.apache.org%2C43427%2C1689963294179.1689963294788 2023-07-21 18:14:54,842 DEBUG [RS:0;jenkins-hbase4:44873] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK], DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK], DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK]] 2023-07-21 18:14:54,846 DEBUG [RS:2;jenkins-hbase4:43427] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK], DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK], DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK]] 2023-07-21 18:14:54,988 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:54,989 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:54,989 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, 
{NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613 2023-07-21 18:14:54,998 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:54,999 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:55,001 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/info 2023-07-21 18:14:55,001 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:55,002 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,002 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:55,003 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:55,003 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:55,004 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,004 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:55,005 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/table 2023-07-21 18:14:55,005 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:55,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,006 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740 2023-07-21 18:14:55,007 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740 2023-07-21 18:14:55,008 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
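The descriptor dumped above lists the attributes of the meta column families; for 'info' they are a NONE bloom filter, in-memory storage, 3 versions, and 8 KB blocks. As a sketch of how those same attributes look when declared through the public 2.x builder API (this is not how hbase:meta itself is constructed):

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class InfoFamilySketch {
      // Mirrors the 'info' family attributes listed in the descriptor above.
      public static ColumnFamilyDescriptor build() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();
      }
    }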
2023-07-21 18:14:55,010 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:55,012 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11028242240, jitterRate=0.027085095643997192}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:55,012 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:55,012 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:55,012 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:55,013 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:55,013 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:14:55,013 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 18:14:55,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 18:14:55,016 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 18:14:55,018 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 18:14:55,168 DEBUG [jenkins-hbase4:46525] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 18:14:55,168 DEBUG [jenkins-hbase4:46525] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:55,168 DEBUG [jenkins-hbase4:46525] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:55,169 DEBUG [jenkins-hbase4:46525] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:55,169 DEBUG [jenkins-hbase4:46525] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:55,169 DEBUG [jenkins-hbase4:46525] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 
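The master has now queued the ASSIGN procedure for region 1588230740 and the balancer has chosen among the three servers. Once the assignment that follows completes, a client can observe where hbase:meta landed; a minimal sketch, with connection details assumed and nothing taken from the log beyond the table being hbase:meta:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateMetaSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }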
2023-07-21 18:14:55,170 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34289,1689963294016, state=OPENING 2023-07-21 18:14:55,171 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 18:14:55,174 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:55,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34289,1689963294016}] 2023-07-21 18:14:55,177 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:55,331 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:55,331 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:55,333 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53080, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:55,338 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 18:14:55,339 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:14:55,340 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34289%2C1689963294016.meta, suffix=.meta, logDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,34289,1689963294016, archiveDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs, maxLogs=32 2023-07-21 18:14:55,359 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK] 2023-07-21 18:14:55,360 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK] 2023-07-21 18:14:55,360 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK] 2023-07-21 18:14:55,362 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/WALs/jenkins-hbase4.apache.org,34289,1689963294016/jenkins-hbase4.apache.org%2C34289%2C1689963294016.meta.1689963295341.meta 2023-07-21 18:14:55,362 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34745,DS-a864ba33-5f7c-4637-a6ce-37fc55747a23,DISK], DatanodeInfoWithStorage[127.0.0.1:36007,DS-e9f00bc9-1322-455d-b177-aca5c3d88506,DISK], DatanodeInfoWithStorage[127.0.0.1:43487,DS-0df0ca97-db2e-433f-a2fa-5b413f2bbef9,DISK]] 2023-07-21 18:14:55,362 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:55,362 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:14:55,362 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 18:14:55,363 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 18:14:55,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 18:14:55,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:55,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 18:14:55,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 18:14:55,364 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:14:55,365 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/info 2023-07-21 18:14:55,365 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/info 2023-07-21 18:14:55,366 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:14:55,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:14:55,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:55,368 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:14:55,368 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:14:55,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:14:55,369 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/table 2023-07-21 18:14:55,369 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/table 2023-07-21 18:14:55,370 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:14:55,370 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740 2023-07-21 18:14:55,372 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740 2023-07-21 18:14:55,374 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 18:14:55,375 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:14:55,376 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11326722560, jitterRate=0.05488324165344238}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:14:55,376 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:14:55,377 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689963295331 2023-07-21 18:14:55,384 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 18:14:55,385 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 18:14:55,385 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34289,1689963294016, state=OPEN 2023-07-21 18:14:55,386 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:14:55,386 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:14:55,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 18:14:55,389 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34289,1689963294016 in 213 msec 2023-07-21 18:14:55,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 18:14:55,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 375 msec 2023-07-21 18:14:55,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure 
table=hbase:meta in 853 msec 2023-07-21 18:14:55,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689963295392, completionTime=-1 2023-07-21 18:14:55,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 18:14:55,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 18:14:55,395 DEBUG [hconnection-0x3f7751bf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:55,397 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53084, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:55,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 18:14:55,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689963355399 2023-07-21 18:14:55,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689963415399 2023-07-21 18:14:55,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-21 18:14:55,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46525,1689963293524-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:55,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46525,1689963293524-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:55,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46525,1689963293524-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:55,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46525, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:55,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:55,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
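With meta open, the master reports that all three region servers have checked in and schedules its housekeeping chores. A short, hedged sketch of confirming the same server count from a client via the 2.x Admin/ClusterMetrics API:

    import org.apache.hadoop.hbase.client.Admin;

    public class ServerCountSketch {
      // Expect 3 live servers for the mini-cluster described in this log.
      public static int liveServers(Admin admin) throws java.io.IOException {
        return admin.getClusterMetrics().getLiveServerMetrics().size();
      }
    }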
2023-07-21 18:14:55,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:55,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 18:14:55,411 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 18:14:55,412 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:55,413 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:55,414 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,415 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9 empty. 2023-07-21 18:14:55,415 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,415 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 18:14:55,430 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:55,432 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 204abb6332db88cb6ab34dbd3f1a85b9, NAME => 'hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp 2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 204abb6332db88cb6ab34dbd3f1a85b9, disabling compactions & flushes 2023-07-21 18:14:55,441 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 
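The hbase:namespace table being created here is the system table that backs client namespace operations. A small sketch of those calls follows; the namespace name "ns1" is a placeholder, not a value from the log.

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class NamespaceSketch {
      public static void createAndList(Admin admin) throws java.io.IOException {
        admin.createNamespace(NamespaceDescriptor.create("ns1").build());
        for (NamespaceDescriptor nd : admin.listNamespaceDescriptors()) {
          System.out.println(nd.getName());
        }
      }
    }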
2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. after waiting 0 ms 2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:55,441 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:55,441 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 204abb6332db88cb6ab34dbd3f1a85b9: 2023-07-21 18:14:55,443 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:55,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963295444"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963295444"}]},"ts":"1689963295444"} 2023-07-21 18:14:55,446 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:55,447 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:55,447 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963295447"}]},"ts":"1689963295447"} 2023-07-21 18:14:55,448 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 18:14:55,457 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:55,457 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:55,457 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:55,457 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:55,457 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:55,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=204abb6332db88cb6ab34dbd3f1a85b9, ASSIGN}] 2023-07-21 18:14:55,460 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=204abb6332db88cb6ab34dbd3f1a85b9, ASSIGN 2023-07-21 18:14:55,460 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=204abb6332db88cb6ab34dbd3f1a85b9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43427,1689963294179; forceNewPlan=false, retain=false 2023-07-21 18:14:55,611 INFO [jenkins-hbase4:46525] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:55,612 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=204abb6332db88cb6ab34dbd3f1a85b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:55,612 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963295612"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963295612"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963295612"}]},"ts":"1689963295612"} 2023-07-21 18:14:55,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 204abb6332db88cb6ab34dbd3f1a85b9, server=jenkins-hbase4.apache.org,43427,1689963294179}] 2023-07-21 18:14:55,654 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:55,656 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 18:14:55,658 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:55,659 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:55,661 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:55,661 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f empty. 
2023-07-21 18:14:55,662 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:55,662 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 18:14:55,679 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:55,680 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2fbc63f2ceb29c1687363312a4d6900f, NAME => 'hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp 2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 2fbc63f2ceb29c1687363312a4d6900f, disabling compactions & flushes 2023-07-21 18:14:55,690 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. after waiting 0 ms 2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:55,690 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 
2023-07-21 18:14:55,690 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 2fbc63f2ceb29c1687363312a4d6900f: 2023-07-21 18:14:55,692 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:55,693 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963295693"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963295693"}]},"ts":"1689963295693"} 2023-07-21 18:14:55,695 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:55,695 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:55,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963295695"}]},"ts":"1689963295695"} 2023-07-21 18:14:55,696 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 18:14:55,700 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:55,701 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:55,701 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:55,701 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:55,701 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:55,701 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2fbc63f2ceb29c1687363312a4d6900f, ASSIGN}] 2023-07-21 18:14:55,702 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2fbc63f2ceb29c1687363312a4d6900f, ASSIGN 2023-07-21 18:14:55,703 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2fbc63f2ceb29c1687363312a4d6900f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44873,1689963293765; forceNewPlan=false, retain=false 2023-07-21 18:14:55,766 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:55,766 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:55,768 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58388, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:55,773 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:55,773 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 204abb6332db88cb6ab34dbd3f1a85b9, NAME => 'hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:55,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:55,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,776 INFO [StoreOpener-204abb6332db88cb6ab34dbd3f1a85b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,777 DEBUG [StoreOpener-204abb6332db88cb6ab34dbd3f1a85b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/info 2023-07-21 18:14:55,777 DEBUG [StoreOpener-204abb6332db88cb6ab34dbd3f1a85b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/info 2023-07-21 18:14:55,777 INFO [StoreOpener-204abb6332db88cb6ab34dbd3f1a85b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 204abb6332db88cb6ab34dbd3f1a85b9 columnFamilyName info 2023-07-21 18:14:55,778 INFO [StoreOpener-204abb6332db88cb6ab34dbd3f1a85b9-1] regionserver.HStore(310): Store=204abb6332db88cb6ab34dbd3f1a85b9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:55,779 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:55,794 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:55,795 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 204abb6332db88cb6ab34dbd3f1a85b9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11097925120, jitterRate=0.033574819564819336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:55,795 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 204abb6332db88cb6ab34dbd3f1a85b9: 2023-07-21 18:14:55,796 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9., pid=6, masterSystemTime=1689963295766 2023-07-21 18:14:55,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:55,801 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 
2023-07-21 18:14:55,802 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=204abb6332db88cb6ab34dbd3f1a85b9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:55,802 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963295802"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963295802"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963295802"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963295802"}]},"ts":"1689963295802"} 2023-07-21 18:14:55,808 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 18:14:55,808 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 204abb6332db88cb6ab34dbd3f1a85b9, server=jenkins-hbase4.apache.org,43427,1689963294179 in 190 msec 2023-07-21 18:14:55,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 18:14:55,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=204abb6332db88cb6ab34dbd3f1a85b9, ASSIGN in 351 msec 2023-07-21 18:14:55,812 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:55,812 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963295812"}]},"ts":"1689963295812"} 2023-07-21 18:14:55,821 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 18:14:55,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 18:14:55,824 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:55,827 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:55,827 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:55,836 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 414 msec 2023-07-21 18:14:55,837 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:55,838 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58396, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-21 18:14:55,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 18:14:55,853 INFO [jenkins-hbase4:46525] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:55,853 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:55,854 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2fbc63f2ceb29c1687363312a4d6900f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:55,854 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963295854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963295854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963295854"}]},"ts":"1689963295854"} 2023-07-21 18:14:55,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=8, state=RUNNABLE; OpenRegionProcedure 2fbc63f2ceb29c1687363312a4d6900f, server=jenkins-hbase4.apache.org,44873,1689963293765}] 2023-07-21 18:14:55,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-07-21 18:14:55,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 18:14:55,871 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 18:14:55,871 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 18:14:56,009 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:56,009 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:14:56,011 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:14:56,015 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 
2023-07-21 18:14:56,015 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2fbc63f2ceb29c1687363312a4d6900f, NAME => 'hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. service=MultiRowMutationService 2023-07-21 18:14:56,016 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,017 INFO [StoreOpener-2fbc63f2ceb29c1687363312a4d6900f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,019 DEBUG [StoreOpener-2fbc63f2ceb29c1687363312a4d6900f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/m 2023-07-21 18:14:56,019 DEBUG [StoreOpener-2fbc63f2ceb29c1687363312a4d6900f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/m 2023-07-21 18:14:56,019 INFO [StoreOpener-2fbc63f2ceb29c1687363312a4d6900f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
2fbc63f2ceb29c1687363312a4d6900f columnFamilyName m 2023-07-21 18:14:56,020 INFO [StoreOpener-2fbc63f2ceb29c1687363312a4d6900f-1] regionserver.HStore(310): Store=2fbc63f2ceb29c1687363312a4d6900f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:56,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:56,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:56,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2fbc63f2ceb29c1687363312a4d6900f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4056fb45, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:56,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2fbc63f2ceb29c1687363312a4d6900f: 2023-07-21 18:14:56,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f., pid=10, masterSystemTime=1689963296009 2023-07-21 18:14:56,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:56,032 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 
2023-07-21 18:14:56,032 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2fbc63f2ceb29c1687363312a4d6900f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:56,032 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963296032"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963296032"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963296032"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963296032"}]},"ts":"1689963296032"} 2023-07-21 18:14:56,039 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=8 2023-07-21 18:14:56,039 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=8, state=SUCCESS; OpenRegionProcedure 2fbc63f2ceb29c1687363312a4d6900f, server=jenkins-hbase4.apache.org,44873,1689963293765 in 181 msec 2023-07-21 18:14:56,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-21 18:14:56,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2fbc63f2ceb29c1687363312a4d6900f, ASSIGN in 338 msec 2023-07-21 18:14:56,048 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:56,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 184 msec 2023-07-21 18:14:56,063 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:56,063 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963296063"}]},"ts":"1689963296063"} 2023-07-21 18:14:56,065 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 18:14:56,067 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:56,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 413 msec 2023-07-21 18:14:56,075 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 18:14:56,077 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 18:14:56,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed 
initialization 1.727sec 2023-07-21 18:14:56,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 18:14:56,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:56,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 18:14:56,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 18:14:56,080 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:56,081 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:56,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 18:14:56,082 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,083 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63 empty. 2023-07-21 18:14:56,083 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,084 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 18:14:56,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 18:14:56,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
2023-07-21 18:14:56,089 WARN [IPC Server handler 0 on default port 42925] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-21 18:14:56,089 WARN [IPC Server handler 0 on default port 42925] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-21 18:14:56,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:56,089 WARN [IPC Server handler 0 on default port 42925] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-21 18:14:56,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:14:56,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 18:14:56,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 18:14:56,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46525,1689963293524-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 18:14:56,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46525,1689963293524-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 18:14:56,090 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 18:14:56,099 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:56,100 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => a9ff6050777c3f21c6591e17bc3cac63, NAME => 'hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp 2023-07-21 18:14:56,112 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:56,112 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing a9ff6050777c3f21c6591e17bc3cac63, disabling compactions & flushes 2023-07-21 18:14:56,112 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,112 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,113 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. after waiting 1 ms 2023-07-21 18:14:56,113 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,113 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,113 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for a9ff6050777c3f21c6591e17bc3cac63: 2023-07-21 18:14:56,115 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:56,116 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689963296116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963296116"}]},"ts":"1689963296116"} 2023-07-21 18:14:56,117 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 18:14:56,118 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:56,118 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963296118"}]},"ts":"1689963296118"} 2023-07-21 18:14:56,119 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 18:14:56,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:56,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:56,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:56,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:56,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:56,123 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=a9ff6050777c3f21c6591e17bc3cac63, ASSIGN}] 2023-07-21 18:14:56,124 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=a9ff6050777c3f21c6591e17bc3cac63, ASSIGN 2023-07-21 18:14:56,125 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=a9ff6050777c3f21c6591e17bc3cac63, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34289,1689963294016; forceNewPlan=false, retain=false 2023-07-21 18:14:56,133 DEBUG [Listener at localhost/40193] zookeeper.ReadOnlyZKClient(139): Connect 0x3b249f64 to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:56,138 DEBUG [Listener at localhost/40193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45accb6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:56,140 DEBUG [hconnection-0x476fe2f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:14:56,141 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53088, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:56,143 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:56,143 INFO [Listener at localhost/40193] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:14:56,159 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=ClientService, sasl=false 2023-07-21 18:14:56,161 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40632, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:14:56,161 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 18:14:56,161 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 18:14:56,165 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:56,165 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:56,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:14:56,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46525,1689963293524] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 18:14:56,246 DEBUG [Listener at localhost/40193] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 18:14:56,247 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52774, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 18:14:56,250 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 18:14:56,250 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:56,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 18:14:56,252 DEBUG [Listener at localhost/40193] zookeeper.ReadOnlyZKClient(139): Connect 0x4fe30c6b to 127.0.0.1:51543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:14:56,256 DEBUG [Listener at localhost/40193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c57143e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:14:56,256 INFO [Listener at localhost/40193] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51543 2023-07-21 18:14:56,260 DEBUG [Listener at localhost/40193-EventThread] 
zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:56,261 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018917ee3e000a connected 2023-07-21 18:14:56,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 18:14:56,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 18:14:56,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 18:14:56,273 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:56,275 INFO [jenkins-hbase4:46525] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:56,276 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a9ff6050777c3f21c6591e17bc3cac63, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:56,276 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689963296276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963296276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963296276"}]},"ts":"1689963296276"} 2023-07-21 18:14:56,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 11 msec 2023-07-21 18:14:56,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure a9ff6050777c3f21c6591e17bc3cac63, server=jenkins-hbase4.apache.org,34289,1689963294016}] 2023-07-21 18:14:56,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 18:14:56,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:56,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 18:14:56,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:56,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] 
master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-21 18:14:56,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:14:56,381 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:56,382 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:14:56,385 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:14:56,386 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,387 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 empty. 2023-07-21 18:14:56,387 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,387 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 18:14:56,399 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 18:14:56,400 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc2cc14400a70c5bbc35b86e70cf0c76, NAME => 'np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing cc2cc14400a70c5bbc35b86e70cf0c76, disabling compactions & flushes 2023-07-21 18:14:56,414 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 
after waiting 0 ms 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,414 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,414 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for cc2cc14400a70c5bbc35b86e70cf0c76: 2023-07-21 18:14:56,416 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:14:56,418 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963296418"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963296418"}]},"ts":"1689963296418"} 2023-07-21 18:14:56,419 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:14:56,420 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:14:56,420 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963296420"}]},"ts":"1689963296420"} 2023-07-21 18:14:56,424 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 18:14:56,429 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:14:56,429 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:14:56,429 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:14:56,429 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:14:56,429 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:14:56,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, ASSIGN}] 2023-07-21 18:14:56,431 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, ASSIGN 2023-07-21 18:14:56,432 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43427,1689963294179; forceNewPlan=false, retain=false 2023-07-21 18:14:56,432 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 
2023-07-21 18:14:56,432 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9ff6050777c3f21c6591e17bc3cac63, NAME => 'hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:56,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:56,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,434 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,435 DEBUG [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/q 2023-07-21 18:14:56,435 DEBUG [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/q 2023-07-21 18:14:56,436 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9ff6050777c3f21c6591e17bc3cac63 columnFamilyName q 2023-07-21 18:14:56,436 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] regionserver.HStore(310): Store=a9ff6050777c3f21c6591e17bc3cac63/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:56,436 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,438 DEBUG 
[StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/u 2023-07-21 18:14:56,438 DEBUG [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/u 2023-07-21 18:14:56,438 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9ff6050777c3f21c6591e17bc3cac63 columnFamilyName u 2023-07-21 18:14:56,438 INFO [StoreOpener-a9ff6050777c3f21c6591e17bc3cac63-1] regionserver.HStore(310): Store=a9ff6050777c3f21c6591e17bc3cac63/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:56,439 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,439 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,441 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-21 18:14:56,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:56,445 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:56,446 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9ff6050777c3f21c6591e17bc3cac63; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10605937120, jitterRate=-0.012245133519172668}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 18:14:56,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9ff6050777c3f21c6591e17bc3cac63: 2023-07-21 18:14:56,447 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63., pid=15, masterSystemTime=1689963296428 2023-07-21 18:14:56,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,448 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:56,449 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a9ff6050777c3f21c6591e17bc3cac63, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:56,449 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689963296449"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963296449"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963296449"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963296449"}]},"ts":"1689963296449"} 2023-07-21 18:14:56,452 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-21 18:14:56,452 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure a9ff6050777c3f21c6591e17bc3cac63, server=jenkins-hbase4.apache.org,34289,1689963294016 in 173 msec 2023-07-21 18:14:56,454 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 18:14:56,454 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=a9ff6050777c3f21c6591e17bc3cac63, ASSIGN in 329 msec 2023-07-21 18:14:56,455 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:56,455 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963296455"}]},"ts":"1689963296455"} 2023-07-21 18:14:56,457 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 18:14:56,460 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:56,462 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 384 msec 2023-07-21 18:14:56,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:14:56,582 INFO [jenkins-hbase4:46525] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:14:56,583 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cc2cc14400a70c5bbc35b86e70cf0c76, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:56,584 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963296583"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963296583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963296583"}]},"ts":"1689963296583"} 2023-07-21 18:14:56,585 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure cc2cc14400a70c5bbc35b86e70cf0c76, server=jenkins-hbase4.apache.org,43427,1689963294179}] 2023-07-21 18:14:56,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:14:56,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 
2023-07-21 18:14:56,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc2cc14400a70c5bbc35b86e70cf0c76, NAME => 'np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:14:56,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:14:56,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,743 INFO [StoreOpener-cc2cc14400a70c5bbc35b86e70cf0c76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,744 DEBUG [StoreOpener-cc2cc14400a70c5bbc35b86e70cf0c76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/fam1 2023-07-21 18:14:56,744 DEBUG [StoreOpener-cc2cc14400a70c5bbc35b86e70cf0c76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/fam1 2023-07-21 18:14:56,744 INFO [StoreOpener-cc2cc14400a70c5bbc35b86e70cf0c76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc2cc14400a70c5bbc35b86e70cf0c76 columnFamilyName fam1 2023-07-21 18:14:56,745 INFO [StoreOpener-cc2cc14400a70c5bbc35b86e70cf0c76-1] regionserver.HStore(310): Store=cc2cc14400a70c5bbc35b86e70cf0c76/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:14:56,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:56,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:14:56,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc2cc14400a70c5bbc35b86e70cf0c76; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11914296320, jitterRate=0.10960531234741211}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:14:56,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc2cc14400a70c5bbc35b86e70cf0c76: 2023-07-21 18:14:56,751 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76., pid=18, masterSystemTime=1689963296737 2023-07-21 18:14:56,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:56,752 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cc2cc14400a70c5bbc35b86e70cf0c76, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:56,753 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963296752"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963296752"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963296752"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963296752"}]},"ts":"1689963296752"} 2023-07-21 18:14:56,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 18:14:56,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure cc2cc14400a70c5bbc35b86e70cf0c76, server=jenkins-hbase4.apache.org,43427,1689963294179 in 169 msec 2023-07-21 18:14:56,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 18:14:56,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, ASSIGN in 325 msec 2023-07-21 18:14:56,757 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:14:56,757 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963296757"}]},"ts":"1689963296757"} 2023-07-21 18:14:56,758 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 18:14:56,760 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:14:56,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 385 msec 2023-07-21 18:14:56,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:14:56,984 INFO [Listener at localhost/40193] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-21 18:14:56,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:14:56,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 18:14:56,988 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:14:56,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 18:14:56,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 18:14:57,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-21 18:14:57,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 18:14:57,093 INFO [Listener at localhost/40193] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-21 18:14:57,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:14:57,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:14:57,096 INFO [Listener at localhost/40193] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 18:14:57,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-21 18:14:57,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 18:14:57,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 18:14:57,100 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963297100"}]},"ts":"1689963297100"} 2023-07-21 18:14:57,101 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 18:14:57,103 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 18:14:57,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, UNASSIGN}] 2023-07-21 18:14:57,107 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, UNASSIGN 2023-07-21 18:14:57,107 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=cc2cc14400a70c5bbc35b86e70cf0c76, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:57,107 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963297107"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963297107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963297107"}]},"ts":"1689963297107"} 2023-07-21 18:14:57,109 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure cc2cc14400a70c5bbc35b86e70cf0c76, server=jenkins-hbase4.apache.org,43427,1689963294179}] 2023-07-21 18:14:57,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 18:14:57,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc2cc14400a70c5bbc35b86e70cf0c76, disabling compactions & flushes 2023-07-21 18:14:57,262 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. after waiting 0 ms 2023-07-21 18:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:57,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:57,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76. 2023-07-21 18:14:57,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc2cc14400a70c5bbc35b86e70cf0c76: 2023-07-21 18:14:57,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:57,268 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=cc2cc14400a70c5bbc35b86e70cf0c76, regionState=CLOSED 2023-07-21 18:14:57,268 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963297268"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963297268"}]},"ts":"1689963297268"} 2023-07-21 18:14:57,271 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 18:14:57,271 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure cc2cc14400a70c5bbc35b86e70cf0c76, server=jenkins-hbase4.apache.org,43427,1689963294179 in 161 msec 2023-07-21 18:14:57,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 18:14:57,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=cc2cc14400a70c5bbc35b86e70cf0c76, UNASSIGN in 167 msec 2023-07-21 18:14:57,272 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963297272"}]},"ts":"1689963297272"} 2023-07-21 18:14:57,273 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 18:14:57,275 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 18:14:57,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 178 msec 2023-07-21 18:14:57,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 18:14:57,402 INFO [Listener at localhost/40193] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 18:14:57,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-21 18:14:57,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,405 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 18:14:57,406 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:14:57,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:14:57,409 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:57,411 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/fam1, FileablePath, hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/recovered.edits] 2023-07-21 18:14:57,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 18:14:57,416 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/recovered.edits/4.seqid to hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/archive/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76/recovered.edits/4.seqid 2023-07-21 18:14:57,416 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/.tmp/data/np1/table1/cc2cc14400a70c5bbc35b86e70cf0c76 2023-07-21 18:14:57,416 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 18:14:57,418 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,420 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-21 18:14:57,421 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-21 18:14:57,422 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,422 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 18:14:57,422 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963297422"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:57,424 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 18:14:57,424 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cc2cc14400a70c5bbc35b86e70cf0c76, NAME => 'np1:table1,,1689963296375.cc2cc14400a70c5bbc35b86e70cf0c76.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 18:14:57,424 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-21 18:14:57,424 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963297424"}]},"ts":"9223372036854775807"} 2023-07-21 18:14:57,425 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 18:14:57,428 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 18:14:57,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 25 msec 2023-07-21 18:14:57,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 18:14:57,514 INFO [Listener at localhost/40193] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 18:14:57,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-21 18:14:57,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,527 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,530 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,533 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 18:14:57,534 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 18:14:57,534 DEBUG [Listener at 
localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:14:57,535 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,539 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 18:14:57,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 20 msec 2023-07-21 18:14:57,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46525] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 18:14:57,634 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 18:14:57,635 INFO [Listener at localhost/40193] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b249f64 to 127.0.0.1:51543 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] util.JVMClusterUtil(257): Found active master hash=1709702623, stopped=false 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:14:57,635 DEBUG [Listener at localhost/40193] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 18:14:57,635 INFO [Listener at localhost/40193] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:57,638 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:57,638 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:57,638 INFO [Listener at localhost/40193] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 18:14:57,638 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:57,638 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:57,638 DEBUG 
[Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:14:57,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:57,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:57,640 DEBUG [Listener at localhost/40193] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x642a6a7d to 127.0.0.1:51543 2023-07-21 18:14:57,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:57,640 DEBUG [Listener at localhost/40193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:57,640 INFO [Listener at localhost/40193] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44873,1689963293765' ***** 2023-07-21 18:14:57,641 INFO [Listener at localhost/40193] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:57,641 INFO [Listener at localhost/40193] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34289,1689963294016' ***** 2023-07-21 18:14:57,641 INFO [Listener at localhost/40193] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:57,641 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:57,641 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:57,641 INFO [Listener at localhost/40193] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43427,1689963294179' ***** 2023-07-21 18:14:57,641 INFO [Listener at localhost/40193] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:14:57,643 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:57,646 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,643 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:57,644 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:57,653 INFO [RS:0;jenkins-hbase4:44873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1def00ae{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:57,653 INFO [RS:2;jenkins-hbase4:43427] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@7ee0305{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:57,653 INFO [RS:1;jenkins-hbase4:34289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36f809c4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:14:57,653 INFO [RS:0;jenkins-hbase4:44873] server.AbstractConnector(383): Stopped ServerConnector@135b9bfe{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:57,653 INFO [RS:2;jenkins-hbase4:43427] server.AbstractConnector(383): Stopped ServerConnector@9edd795{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:57,653 INFO [RS:0;jenkins-hbase4:44873] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:57,654 INFO [RS:2;jenkins-hbase4:43427] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:57,654 INFO [RS:1;jenkins-hbase4:34289] server.AbstractConnector(383): Stopped ServerConnector@34ebe5e2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:57,656 INFO [RS:1;jenkins-hbase4:34289] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:57,656 INFO [RS:2;jenkins-hbase4:43427] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@778bcb86{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:0;jenkins-hbase4:44873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37381802{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:2;jenkins-hbase4:43427] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5a7753c6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:1;jenkins-hbase4:34289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29771e0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:1;jenkins-hbase4:34289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23bc259{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:0;jenkins-hbase4:44873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ebe50f9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:57,656 INFO [RS:2;jenkins-hbase4:43427] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:57,657 INFO [RS:2;jenkins-hbase4:43427] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 18:14:57,657 INFO [RS:2;jenkins-hbase4:43427] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:14:57,657 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(3305): Received CLOSE for 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:57,657 INFO [RS:0;jenkins-hbase4:44873] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:57,657 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:57,657 INFO [RS:0;jenkins-hbase4:44873] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:57,658 DEBUG [RS:2;jenkins-hbase4:43427] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x317ee0ad to 127.0.0.1:51543 2023-07-21 18:14:57,658 DEBUG [RS:2;jenkins-hbase4:43427] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,659 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 18:14:57,659 INFO [RS:1;jenkins-hbase4:34289] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:14:57,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 204abb6332db88cb6ab34dbd3f1a85b9, disabling compactions & flushes 2023-07-21 18:14:57,659 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:14:57,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:57,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:57,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. after waiting 0 ms 2023-07-21 18:14:57,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:57,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 204abb6332db88cb6ab34dbd3f1a85b9 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 18:14:57,659 INFO [RS:1;jenkins-hbase4:34289] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:14:57,659 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1478): Online Regions={204abb6332db88cb6ab34dbd3f1a85b9=hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9.} 2023-07-21 18:14:57,658 INFO [RS:0;jenkins-hbase4:44873] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:14:57,660 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(3305): Received CLOSE for 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:57,660 INFO [RS:1;jenkins-hbase4:34289] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 18:14:57,660 DEBUG [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1504): Waiting on 204abb6332db88cb6ab34dbd3f1a85b9 2023-07-21 18:14:57,660 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(3305): Received CLOSE for a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:57,660 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:57,661 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:57,661 DEBUG [RS:1;jenkins-hbase4:34289] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47b9e706 to 127.0.0.1:51543 2023-07-21 18:14:57,661 DEBUG [RS:1;jenkins-hbase4:34289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,661 DEBUG [RS:0;jenkins-hbase4:44873] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x723d6c07 to 127.0.0.1:51543 2023-07-21 18:14:57,661 INFO [RS:1;jenkins-hbase4:34289] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:57,661 DEBUG [RS:0;jenkins-hbase4:44873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,661 INFO [RS:1;jenkins-hbase4:34289] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:57,663 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9ff6050777c3f21c6591e17bc3cac63, disabling compactions & flushes 2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2fbc63f2ceb29c1687363312a4d6900f, disabling compactions & flushes 2023-07-21 18:14:57,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:57,663 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1478): Online Regions={2fbc63f2ceb29c1687363312a4d6900f=hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f.} 2023-07-21 18:14:57,663 INFO [RS:1;jenkins-hbase4:34289] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:57,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. after waiting 0 ms 2023-07-21 18:14:57,663 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 18:14:57,663 DEBUG [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1504): Waiting on 2fbc63f2ceb29c1687363312a4d6900f 2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 
2023-07-21 18:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:57,664 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. after waiting 0 ms 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:57,664 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, a9ff6050777c3f21c6591e17bc3cac63=hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63.} 2023-07-21 18:14:57,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2fbc63f2ceb29c1687363312a4d6900f 1/1 column families, dataSize=633 B heapSize=1.09 KB 2023-07-21 18:14:57,664 DEBUG [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1504): Waiting on 1588230740, a9ff6050777c3f21c6591e17bc3cac63 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:14:57,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:14:57,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:14:57,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-21 18:14:57,668 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 18:14:57,668 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 18:14:57,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/quota/a9ff6050777c3f21c6591e17bc3cac63/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:14:57,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 2023-07-21 18:14:57,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9ff6050777c3f21c6591e17bc3cac63: 2023-07-21 18:14:57,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689963296077.a9ff6050777c3f21c6591e17bc3cac63. 
2023-07-21 18:14:57,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/.tmp/info/7c17b6ef83914681a2696247bb4aaa5f 2023-07-21 18:14:57,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=633 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/.tmp/m/df102f7ec2334abd88a7368624dfe75a 2023-07-21 18:14:57,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/info/c3850319e17844f59a55d6a4c2b595df 2023-07-21 18:14:57,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c17b6ef83914681a2696247bb4aaa5f 2023-07-21 18:14:57,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/.tmp/info/7c17b6ef83914681a2696247bb4aaa5f as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/info/7c17b6ef83914681a2696247bb4aaa5f 2023-07-21 18:14:57,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/.tmp/m/df102f7ec2334abd88a7368624dfe75a as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/m/df102f7ec2334abd88a7368624dfe75a 2023-07-21 18:14:57,700 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c3850319e17844f59a55d6a4c2b595df 2023-07-21 18:14:57,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c17b6ef83914681a2696247bb4aaa5f 2023-07-21 18:14:57,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/info/7c17b6ef83914681a2696247bb4aaa5f, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 18:14:57,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 204abb6332db88cb6ab34dbd3f1a85b9 in 45ms, sequenceid=8, compaction requested=false 2023-07-21 18:14:57,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 18:14:57,705 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: 
CompactionChecker was stopped 2023-07-21 18:14:57,705 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 18:14:57,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/m/df102f7ec2334abd88a7368624dfe75a, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 18:14:57,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~633 B/633, heapSize ~1.07 KB/1096, currentSize=0 B/0 for 2fbc63f2ceb29c1687363312a4d6900f in 43ms, sequenceid=7, compaction requested=false 2023-07-21 18:14:57,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 18:14:57,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/rsgroup/2fbc63f2ceb29c1687363312a4d6900f/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 18:14:57,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:57,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:57,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2fbc63f2ceb29c1687363312a4d6900f: 2023-07-21 18:14:57,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689963295654.2fbc63f2ceb29c1687363312a4d6900f. 2023-07-21 18:14:57,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/namespace/204abb6332db88cb6ab34dbd3f1a85b9/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 18:14:57,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/rep_barrier/33c64cac28f94bfa8a5c4921f61592e2 2023-07-21 18:14:57,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 2023-07-21 18:14:57,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 204abb6332db88cb6ab34dbd3f1a85b9: 2023-07-21 18:14:57,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689963295410.204abb6332db88cb6ab34dbd3f1a85b9. 
2023-07-21 18:14:57,732 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 33c64cac28f94bfa8a5c4921f61592e2 2023-07-21 18:14:57,743 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,760 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/table/7f54ee4e06fc40c59866c484b23ccb6d 2023-07-21 18:14:57,765 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7f54ee4e06fc40c59866c484b23ccb6d 2023-07-21 18:14:57,766 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/info/c3850319e17844f59a55d6a4c2b595df as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/info/c3850319e17844f59a55d6a4c2b595df 2023-07-21 18:14:57,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c3850319e17844f59a55d6a4c2b595df 2023-07-21 18:14:57,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/info/c3850319e17844f59a55d6a4c2b595df, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 18:14:57,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/rep_barrier/33c64cac28f94bfa8a5c4921f61592e2 as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/rep_barrier/33c64cac28f94bfa8a5c4921f61592e2 2023-07-21 18:14:57,779 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 33c64cac28f94bfa8a5c4921f61592e2 2023-07-21 18:14:57,779 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/rep_barrier/33c64cac28f94bfa8a5c4921f61592e2, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 18:14:57,780 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/.tmp/table/7f54ee4e06fc40c59866c484b23ccb6d as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/table/7f54ee4e06fc40c59866c484b23ccb6d 2023-07-21 18:14:57,786 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7f54ee4e06fc40c59866c484b23ccb6d 2023-07-21 
18:14:57,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/table/7f54ee4e06fc40c59866c484b23ccb6d, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 18:14:57,788 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 123ms, sequenceid=31, compaction requested=false 2023-07-21 18:14:57,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 18:14:57,799 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 18:14:57,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:14:57,800 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:57,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:14:57,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 18:14:57,860 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43427,1689963294179; all regions closed. 2023-07-21 18:14:57,861 DEBUG [RS:2;jenkins-hbase4:43427] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 18:14:57,864 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44873,1689963293765; all regions closed. 2023-07-21 18:14:57,864 DEBUG [RS:0;jenkins-hbase4:44873] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 18:14:57,864 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34289,1689963294016; all regions closed. 2023-07-21 18:14:57,864 DEBUG [RS:1;jenkins-hbase4:34289] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 18:14:57,872 DEBUG [RS:2;jenkins-hbase4:43427] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs 2023-07-21 18:14:57,872 INFO [RS:2;jenkins-hbase4:43427] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43427%2C1689963294179:(num 1689963294788) 2023-07-21 18:14:57,872 DEBUG [RS:2;jenkins-hbase4:43427] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,872 INFO [RS:2;jenkins-hbase4:43427] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,873 INFO [RS:2;jenkins-hbase4:43427] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:57,873 INFO [RS:2;jenkins-hbase4:43427] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
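The entries just above record RS:2's WAL being archived ("Moved 1 WAL file(s) to .../oldWALs"); the other region servers follow in the entries below. As an illustration only, and not code taken from this test run, the archived-WAL directory named in the log could be listed with the stock Hadoop FileSystem API. The class name ListOldWals is made up for the sketch; the NameNode address and the oldWALs path are copied from the surrounding log entries.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper, not part of the test that produced this log.
    public class ListOldWals {
      public static void main(String[] args) throws Exception {
        // NameNode and oldWALs path as printed in the log entries above.
        URI hdfs = URI.create("hdfs://localhost:42925");
        Path oldWals =
            new Path("/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs");

        try (FileSystem fs = FileSystem.get(hdfs, new Configuration())) {
          // Each WAL that AbstractFSWAL has archived shows up here as a regular file.
          for (FileStatus status : fs.listStatus(oldWals)) {
            System.out.printf("%s (%d bytes)%n", status.getPath().getName(), status.getLen());
          }
        }
      }
    }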
2023-07-21 18:14:57,873 INFO [RS:2;jenkins-hbase4:43427] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:57,873 INFO [RS:2;jenkins-hbase4:43427] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:14:57,874 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:57,875 INFO [RS:2;jenkins-hbase4:43427] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43427 2023-07-21 18:14:57,879 DEBUG [RS:0;jenkins-hbase4:44873] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44873%2C1689963293765:(num 1689963294788) 2023-07-21 18:14:57,879 DEBUG [RS:0;jenkins-hbase4:44873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:57,879 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:14:57,879 INFO [RS:0;jenkins-hbase4:44873] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 18:14:57,879 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:57,879 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:57,879 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43427,1689963294179 2023-07-21 18:14:57,880 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,880 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,880 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:57,880 INFO [RS:0;jenkins-hbase4:44873] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44873 2023-07-21 18:14:57,879 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,879 DEBUG [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs 2023-07-21 18:14:57,880 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43427,1689963294179] 2023-07-21 18:14:57,880 INFO [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34289%2C1689963294016.meta:.meta(num 1689963295341) 2023-07-21 18:14:57,883 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43427,1689963294179; numProcessing=1 2023-07-21 18:14:57,884 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:57,884 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,884 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44873,1689963293765 2023-07-21 18:14:57,888 DEBUG 
[RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43427,1689963294179 already deleted, retry=false 2023-07-21 18:14:57,889 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43427,1689963294179 expired; onlineServers=2 2023-07-21 18:14:57,894 DEBUG [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/oldWALs 2023-07-21 18:14:57,894 INFO [RS:1;jenkins-hbase4:34289] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34289%2C1689963294016:(num 1689963294772) 2023-07-21 18:14:57,894 DEBUG [RS:1;jenkins-hbase4:34289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,894 INFO [RS:1;jenkins-hbase4:34289] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:14:57,895 INFO [RS:1;jenkins-hbase4:34289] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:57,895 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:57,896 INFO [RS:1;jenkins-hbase4:34289] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34289 2023-07-21 18:14:57,988 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:57,988 INFO [RS:0;jenkins-hbase4:44873] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44873,1689963293765; zookeeper connection closed. 
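The ZKWatcher entries in this stretch are NodeDeleted / NodeChildrenChanged events under /hbase/rs, fired as each region server's ephemeral znode is removed and the master processes the expiration. A minimal sketch of observing the same thing with the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher; the quorum address 127.0.0.1:51543 and the /hbase/rs path come from the log, while the class name RsNodeWatch, the session timeout, and the sleep are illustrative assumptions.

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical observer, not part of the test that produced this log.
    public class RsNodeWatch {
      public static void main(String[] args) throws Exception {
        // Prints events as they arrive, e.g. NodeChildrenChanged on /hbase/rs.
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event=" + event.getType() + " path=" + event.getPath());

        ZooKeeper zk = new ZooKeeper("127.0.0.1:51543", 30_000, watcher);
        try {
          // getChildren with watch=true registers a one-shot watch; removing an
          // ephemeral /hbase/rs/<server> node triggers a NodeChildrenChanged event.
          List<String> servers = zk.getChildren("/hbase/rs", true);
          System.out.println("online region servers: " + servers);
          Thread.sleep(60_000); // keep the session open long enough to see the events
        } finally {
          zk.close();
        }
      }
    }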
2023-07-21 18:14:57,988 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:44873-0x1018917ee3e0001, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:57,989 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44873,1689963293765] 2023-07-21 18:14:57,989 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44873,1689963293765; numProcessing=2 2023-07-21 18:14:57,989 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6cbf4bd2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6cbf4bd2 2023-07-21 18:14:57,990 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:14:57,990 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34289,1689963294016 2023-07-21 18:14:57,991 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44873,1689963293765 already deleted, retry=false 2023-07-21 18:14:57,991 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44873,1689963293765 expired; onlineServers=1 2023-07-21 18:14:57,992 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34289,1689963294016] 2023-07-21 18:14:57,993 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34289,1689963294016; numProcessing=3 2023-07-21 18:14:57,994 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34289,1689963294016 already deleted, retry=false 2023-07-21 18:14:57,994 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34289,1689963294016 expired; onlineServers=0 2023-07-21 18:14:57,994 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46525,1689963293524' ***** 2023-07-21 18:14:57,994 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 18:14:57,994 DEBUG [M:0;jenkins-hbase4:46525] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61d875bb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:14:57,995 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:14:57,996 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 18:14:57,996 DEBUG [Listener at localhost/40193-EventThread] 
zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:14:57,997 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:57,997 INFO [M:0;jenkins-hbase4:46525] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@59827a1c{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 18:14:57,997 INFO [M:0;jenkins-hbase4:46525] server.AbstractConnector(383): Stopped ServerConnector@1aa13f52{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:57,997 INFO [M:0;jenkins-hbase4:46525] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:14:57,997 INFO [M:0;jenkins-hbase4:46525] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fe18136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:14:57,997 INFO [M:0;jenkins-hbase4:46525] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@333c6535{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir/,STOPPED} 2023-07-21 18:14:57,998 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46525,1689963293524 2023-07-21 18:14:57,998 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46525,1689963293524; all regions closed. 2023-07-21 18:14:57,998 DEBUG [M:0;jenkins-hbase4:46525] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:14:57,998 INFO [M:0;jenkins-hbase4:46525] master.HMaster(1491): Stopping master jetty server 2023-07-21 18:14:57,998 INFO [M:0;jenkins-hbase4:46525] server.AbstractConnector(383): Stopped ServerConnector@27dff1cb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:14:57,999 DEBUG [M:0;jenkins-hbase4:46525] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 18:14:57,999 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 18:14:57,999 DEBUG [M:0;jenkins-hbase4:46525] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 18:14:57,999 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963294572] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963294572,5,FailOnTimeoutGroup] 2023-07-21 18:14:57,999 INFO [M:0;jenkins-hbase4:46525] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 18:14:57,999 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963294572] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963294572,5,FailOnTimeoutGroup] 2023-07-21 18:14:58,000 INFO [M:0;jenkins-hbase4:46525] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-21 18:14:58,001 INFO [M:0;jenkins-hbase4:46525] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 18:14:58,001 DEBUG [M:0;jenkins-hbase4:46525] master.HMaster(1512): Stopping service threads 2023-07-21 18:14:58,001 INFO [M:0;jenkins-hbase4:46525] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 18:14:58,001 ERROR [M:0;jenkins-hbase4:46525] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 18:14:58,001 INFO [M:0;jenkins-hbase4:46525] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 18:14:58,001 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 18:14:58,002 DEBUG [M:0;jenkins-hbase4:46525] zookeeper.ZKUtil(398): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 18:14:58,002 WARN [M:0;jenkins-hbase4:46525] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 18:14:58,002 INFO [M:0;jenkins-hbase4:46525] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 18:14:58,002 INFO [M:0;jenkins-hbase4:46525] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 18:14:58,002 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 18:14:58,002 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:58,002 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:58,002 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 18:14:58,002 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 18:14:58,003 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.96 KB heapSize=109.11 KB 2023-07-21 18:14:58,014 INFO [M:0;jenkins-hbase4:46525] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.96 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e5c00151c57d48ba951c07c6c9b9c7d0 2023-07-21 18:14:58,021 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e5c00151c57d48ba951c07c6c9b9c7d0 as hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e5c00151c57d48ba951c07c6c9b9c7d0 2023-07-21 18:14:58,026 INFO [M:0;jenkins-hbase4:46525] regionserver.HStore(1080): Added hdfs://localhost:42925/user/jenkins/test-data/b4822a8d-07e4-d645-0dc5-910d28dce613/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e5c00151c57d48ba951c07c6c9b9c7d0, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 18:14:58,027 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegion(2948): Finished flush of dataSize ~92.96 KB/95188, heapSize ~109.09 KB/111712, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-21 18:14:58,028 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:14:58,028 DEBUG [M:0;jenkins-hbase4:46525] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:14:58,033 INFO [M:0;jenkins-hbase4:46525] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 18:14:58,033 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:14:58,034 INFO [M:0;jenkins-hbase4:46525] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46525 2023-07-21 18:14:58,035 DEBUG [M:0;jenkins-hbase4:46525] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46525,1689963293524 already deleted, retry=false 2023-07-21 18:14:58,137 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,137 INFO [RS:1;jenkins-hbase4:34289] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34289,1689963294016; zookeeper connection closed. 
2023-07-21 18:14:58,137 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:34289-0x1018917ee3e0002, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,139 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@277e73a5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@277e73a5 2023-07-21 18:14:58,238 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,238 INFO [RS:2;jenkins-hbase4:43427] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43427,1689963294179; zookeeper connection closed. 2023-07-21 18:14:58,238 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): regionserver:43427-0x1018917ee3e0003, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,238 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a25bfe2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a25bfe2 2023-07-21 18:14:58,238 INFO [Listener at localhost/40193] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 18:14:58,338 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,338 INFO [M:0;jenkins-hbase4:46525] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46525,1689963293524; zookeeper connection closed. 
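With "Shutdown of 1 master(s) and 3 regionserver(s) complete" logged here, the entries that follow tear down the DataNodes and ZooKeeper and then bring up a fresh minicluster with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}. A hedged sketch of how a test normally drives that stop/start cycle through the HBase 2.x testing API; HBaseTestingUtility and StartMiniClusterOption are both named in the log, but the builder calls and the class name MiniClusterCycle are my reconstruction, not code taken from this run.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Hypothetical driver, not the test class that produced this log.
    public class MiniClusterCycle {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Same cluster shape as the option printed in the restart entries below:
        // 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();

        util.startMiniCluster(option);
        try {
          // Test body would run against util.getConnection() here.
        } finally {
          // Produces the kind of orderly shutdown recorded above: regions close and
          // flush, WALs move to oldWALs, ZK ephemeral nodes under /hbase/rs go away.
          util.shutdownMiniCluster();
        }
      }
    }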
2023-07-21 18:14:58,338 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): master:46525-0x1018917ee3e0000, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 18:14:58,339 WARN [Listener at localhost/40193] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 18:14:58,348 INFO [Listener at localhost/40193] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:58,453 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:58,453 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-656315537-172.31.14.131-1689963292352 (Datanode Uuid 52389ff1-2798-4118-9a33-3a4b143b6d06) service to localhost/127.0.0.1:42925 2023-07-21 18:14:58,454 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data5/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,454 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data6/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,455 WARN [Listener at localhost/40193] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 18:14:58,461 INFO [Listener at localhost/40193] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:58,566 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:58,566 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-656315537-172.31.14.131-1689963292352 (Datanode Uuid 595e82c8-4a9d-4e82-9c27-1545721fa011) service to localhost/127.0.0.1:42925 2023-07-21 18:14:58,567 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data3/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,568 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data4/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,569 WARN [Listener at localhost/40193] datanode.DirectoryScanner(534): DirectoryScanner: 
shutdown has been called 2023-07-21 18:14:58,572 INFO [Listener at localhost/40193] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:58,677 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 18:14:58,677 WARN [BP-656315537-172.31.14.131-1689963292352 heartbeating to localhost/127.0.0.1:42925] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-656315537-172.31.14.131-1689963292352 (Datanode Uuid 7af9a7ae-a25e-425b-a226-36333ffe6a14) service to localhost/127.0.0.1:42925 2023-07-21 18:14:58,678 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data1/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,678 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/cluster_d59f38d1-c0ba-73de-dc1a-6d970ef140a3/dfs/data/data2/current/BP-656315537-172.31.14.131-1689963292352] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 18:14:58,688 INFO [Listener at localhost/40193] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 18:14:58,804 INFO [Listener at localhost/40193] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.log.dir so I do NOT create it in target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/83e677ee-a23a-7f9e-86d5-a16819d92b37/hadoop.tmp.dir so I do NOT create it in target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24, deleteOnExit=true 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/test.cache.data in system properties and HBase conf 2023-07-21 18:14:58,830 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 18:14:58,831 DEBUG [Listener at localhost/40193] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 18:14:58,831 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/nfs.dump.dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 18:14:58,832 INFO [Listener at localhost/40193] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 18:14:58,836 WARN [Listener at localhost/40193] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 18:14:58,837 WARN [Listener at localhost/40193] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 18:14:58,876 WARN [Listener at localhost/40193] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:58,878 INFO [Listener at localhost/40193] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:58,882 INFO 
[Listener at localhost/40193] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/Jetty_localhost_41395_hdfs____.82z810/webapp 2023-07-21 18:14:58,902 DEBUG [Listener at localhost/40193-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018917ee3e000a, quorum=127.0.0.1:51543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 18:14:58,902 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018917ee3e000a, quorum=127.0.0.1:51543, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 18:14:58,976 INFO [Listener at localhost/40193] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41395 2023-07-21 18:14:58,980 WARN [Listener at localhost/40193] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 18:14:58,980 WARN [Listener at localhost/40193] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 18:14:59,019 WARN [Listener at localhost/34709] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:59,036 WARN [Listener at localhost/34709] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:59,042 WARN [Listener at localhost/34709] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:59,043 INFO [Listener at localhost/34709] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:59,047 INFO [Listener at localhost/34709] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/Jetty_localhost_40723_datanode____ebnmoz/webapp 2023-07-21 18:14:59,142 INFO [Listener at localhost/34709] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40723 2023-07-21 18:14:59,150 WARN [Listener at localhost/45789] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:59,167 WARN [Listener at localhost/45789] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:59,170 WARN [Listener at localhost/45789] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:59,171 INFO [Listener at localhost/45789] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:59,174 INFO [Listener at localhost/45789] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/Jetty_localhost_35661_datanode____.rwxbd6/webapp 2023-07-21 18:14:59,248 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x909444a5460797e7: Processing first storage report for DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5 from datanode dcc10afb-0f4c-4d50-81f9-ca2417f2128b 2023-07-21 18:14:59,249 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x909444a5460797e7: from storage DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5 node DatanodeRegistration(127.0.0.1:42999, datanodeUuid=dcc10afb-0f4c-4d50-81f9-ca2417f2128b, infoPort=39555, infoSecurePort=0, ipcPort=45789, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,249 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x909444a5460797e7: Processing first storage report for DS-90adb69a-b050-46da-863d-14678a795cf0 from datanode dcc10afb-0f4c-4d50-81f9-ca2417f2128b 2023-07-21 18:14:59,249 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x909444a5460797e7: from storage DS-90adb69a-b050-46da-863d-14678a795cf0 node DatanodeRegistration(127.0.0.1:42999, datanodeUuid=dcc10afb-0f4c-4d50-81f9-ca2417f2128b, infoPort=39555, infoSecurePort=0, ipcPort=45789, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,273 INFO [Listener at localhost/45789] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35661 2023-07-21 18:14:59,282 WARN [Listener at localhost/38185] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:59,329 WARN [Listener at localhost/38185] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 18:14:59,332 WARN [Listener at localhost/38185] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 18:14:59,333 INFO [Listener at localhost/38185] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 18:14:59,342 INFO [Listener at localhost/38185] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/Jetty_localhost_36187_datanode____68kdg4/webapp 2023-07-21 18:14:59,427 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3f770f5b9194e710: Processing first storage report for DS-b045540e-f2f7-4c13-9b54-019e5d307dcf from datanode 018ddf14-dd77-49b8-9d78-1260d389af29 2023-07-21 18:14:59,427 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3f770f5b9194e710: from storage DS-b045540e-f2f7-4c13-9b54-019e5d307dcf node DatanodeRegistration(127.0.0.1:36115, datanodeUuid=018ddf14-dd77-49b8-9d78-1260d389af29, infoPort=43493, infoSecurePort=0, ipcPort=38185, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,427 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3f770f5b9194e710: Processing first storage report for DS-68c9a7b0-eae2-4c16-a18a-28b4206411e6 from datanode 
018ddf14-dd77-49b8-9d78-1260d389af29 2023-07-21 18:14:59,427 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3f770f5b9194e710: from storage DS-68c9a7b0-eae2-4c16-a18a-28b4206411e6 node DatanodeRegistration(127.0.0.1:36115, datanodeUuid=018ddf14-dd77-49b8-9d78-1260d389af29, infoPort=43493, infoSecurePort=0, ipcPort=38185, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,466 INFO [Listener at localhost/38185] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36187 2023-07-21 18:14:59,474 WARN [Listener at localhost/43809] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 18:14:59,595 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa151bbe8f24fd051: Processing first storage report for DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7 from datanode 545714ce-165b-4ec1-866e-18fe448c2a40 2023-07-21 18:14:59,595 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa151bbe8f24fd051: from storage DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7 node DatanodeRegistration(127.0.0.1:35465, datanodeUuid=545714ce-165b-4ec1-866e-18fe448c2a40, infoPort=41019, infoSecurePort=0, ipcPort=43809, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,595 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa151bbe8f24fd051: Processing first storage report for DS-4ec9e01c-f82e-4d31-b56a-5ade0170efe2 from datanode 545714ce-165b-4ec1-866e-18fe448c2a40 2023-07-21 18:14:59,595 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa151bbe8f24fd051: from storage DS-4ec9e01c-f82e-4d31-b56a-5ade0170efe2 node DatanodeRegistration(127.0.0.1:35465, datanodeUuid=545714ce-165b-4ec1-866e-18fe448c2a40, infoPort=41019, infoSecurePort=0, ipcPort=43809, storageInfo=lv=-57;cid=testClusterID;nsid=642897950;c=1689963298839), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 18:14:59,682 DEBUG [Listener at localhost/43809] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340 2023-07-21 18:14:59,684 INFO [Listener at localhost/43809] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/zookeeper_0, clientPort=60536, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 18:14:59,685 INFO [Listener at localhost/43809] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=60536 2023-07-21 18:14:59,685 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,686 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,705 INFO [Listener at localhost/43809] util.FSUtils(471): Created version file at hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 with version=8 2023-07-21 18:14:59,706 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37139/user/jenkins/test-data/8f120add-fa7b-7be5-2f29-4f9e64072966/hbase-staging 2023-07-21 18:14:59,706 DEBUG [Listener at localhost/43809] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 18:14:59,706 DEBUG [Listener at localhost/43809] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 18:14:59,707 DEBUG [Listener at localhost/43809] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 18:14:59,707 DEBUG [Listener at localhost/43809] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 18:14:59,707 INFO [Listener at localhost/43809] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:59,707 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:59,708 INFO [Listener at localhost/43809] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34779 2023-07-21 18:14:59,709 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,710 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,711 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34779 connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:14:59,719 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:347790x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:59,720 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34779-0x101891806670000 connected 2023-07-21 18:14:59,732 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:59,733 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:59,733 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:59,734 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34779 2023-07-21 18:14:59,734 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34779 2023-07-21 18:14:59,734 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34779 2023-07-21 18:14:59,734 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34779 2023-07-21 18:14:59,735 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34779 2023-07-21 18:14:59,736 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:59,736 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:59,736 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:59,737 INFO [Listener at localhost/43809] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 18:14:59,737 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:59,737 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:59,737 INFO [Listener at localhost/43809] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
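The repeated "Set watcher on znode that does not yet exist" lines above come from the master process registering existence watches on /hbase/master, /hbase/running and /hbase/acl before those znodes have been created. Below is a minimal sketch of the same pattern using the stock ZooKeeper client API rather than HBase's internal ZKWatcher/RecoverableZooKeeper wrappers; the quorum address, session timeout and sleep are illustrative placeholders and are not values taken from this run.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Placeholder quorum address; the mini cluster in this log picked a random client port.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // exists() registers the watch even when the znode is absent, so the
        // callback fires once the active master later creates /hbase/master.
        zk.exists("/hbase/master", (WatchedEvent event) -> {
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                System.out.println("master znode created: " + event.getPath());
            }
        });

        Thread.sleep(5_000); // keep the session alive long enough to observe the event
        zk.close();
    }
}

HBase wraps this exists-with-watch pattern in ZKUtil.watchAndCheckExists(), which appears to be what emits the ZKUtil(164) log lines seen above.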
2023-07-21 18:14:59,738 INFO [Listener at localhost/43809] http.HttpServer(1146): Jetty bound to port 39617 2023-07-21 18:14:59,738 INFO [Listener at localhost/43809] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:59,739 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:59,739 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7db7184a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:59,740 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:59,740 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3693f4a4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:14:59,856 INFO [Listener at localhost/43809] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:14:59,857 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:14:59,857 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:14:59,858 INFO [Listener at localhost/43809] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:14:59,859 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:59,860 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2ae61d88{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/jetty-0_0_0_0-39617-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8072012599303944904/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 18:14:59,861 INFO [Listener at localhost/43809] server.AbstractConnector(333): Started ServerConnector@1c0afd76{HTTP/1.1, (http/1.1)}{0.0.0.0:39617} 2023-07-21 18:14:59,861 INFO [Listener at localhost/43809] server.Server(415): Started @43995ms 2023-07-21 18:14:59,861 INFO [Listener at localhost/43809] master.HMaster(444): hbase.rootdir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1, hbase.cluster.distributed=false 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,875 
INFO [Listener at localhost/43809] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:14:59,875 INFO [Listener at localhost/43809] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:14:59,877 INFO [Listener at localhost/43809] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45925 2023-07-21 18:14:59,877 INFO [Listener at localhost/43809] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:14:59,878 DEBUG [Listener at localhost/43809] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:14:59,879 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,880 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:14:59,880 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45925 connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:14:59,884 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:459250x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:14:59,885 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45925-0x101891806670001 connected 2023-07-21 18:14:59,885 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:14:59,885 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:14:59,886 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:14:59,886 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45925 2023-07-21 18:14:59,889 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45925 2023-07-21 18:14:59,889 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45925 2023-07-21 18:14:59,890 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45925 2023-07-21 18:14:59,890 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45925 2023-07-21 18:14:59,891 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:14:59,892 INFO [Listener at localhost/43809] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:14:59,893 INFO [Listener at localhost/43809] http.HttpServer(1146): Jetty bound to port 35771 2023-07-21 18:14:59,893 INFO [Listener at localhost/43809] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:14:59,894 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:59,894 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d4bb916{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:14:59,895 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:14:59,895 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@513b2579{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:15:00,018 INFO [Listener at localhost/43809] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:15:00,018 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:15:00,019 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:15:00,019 INFO [Listener at localhost/43809] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:15:00,020 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,021 INFO 
[Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1056e046{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/jetty-0_0_0_0-35771-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4218113041775410633/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:00,022 INFO [Listener at localhost/43809] server.AbstractConnector(333): Started ServerConnector@42d0e97d{HTTP/1.1, (http/1.1)}{0.0.0.0:35771} 2023-07-21 18:15:00,022 INFO [Listener at localhost/43809] server.Server(415): Started @44156ms 2023-07-21 18:15:00,039 INFO [Listener at localhost/43809] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:15:00,039 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,039 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,040 INFO [Listener at localhost/43809] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:15:00,040 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,040 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:15:00,040 INFO [Listener at localhost/43809] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:15:00,041 INFO [Listener at localhost/43809] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34959 2023-07-21 18:15:00,041 INFO [Listener at localhost/43809] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:15:00,043 DEBUG [Listener at localhost/43809] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:15:00,043 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:00,045 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:00,046 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34959 connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:15:00,050 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:349590x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
18:15:00,051 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:349590x0, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:15:00,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34959-0x101891806670002 connected 2023-07-21 18:15:00,052 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:00,052 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:15:00,055 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34959 2023-07-21 18:15:00,055 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34959 2023-07-21 18:15:00,056 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34959 2023-07-21 18:15:00,057 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34959 2023-07-21 18:15:00,058 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34959 2023-07-21 18:15:00,060 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:15:00,060 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:15:00,060 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:15:00,061 INFO [Listener at localhost/43809] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:15:00,061 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:15:00,061 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:15:00,061 INFO [Listener at localhost/43809] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
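The regionserver startup above prints the sizing decisions made from the test configuration: RPC executors instantiated with handlerCount=3, call queues of maxQueueLength=30, and an on-heap block cache of 782.40 MB. For reference, the sketch below shows how such values are normally influenced through site configuration; the concrete values set here are illustrative, not the ones this test used, though the two property names are standard HBase keys.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionServerTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // RPC handler threads per region server; small counts like the 3 seen in
        // this log keep a mini cluster lightweight, production defaults are larger.
        conf.setInt("hbase.regionserver.handler.count", 30);

        // Fraction of the JVM heap reserved for the on-heap block cache; the
        // "Allocating BlockCache size=..." figure above is derived from this fraction.
        conf.setFloat("hfile.block.cache.size", 0.4f);

        System.out.println("handlers = " + conf.getInt("hbase.regionserver.handler.count", -1)
            + ", block cache fraction = " + conf.getFloat("hfile.block.cache.size", -1f));
    }
}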
2023-07-21 18:15:00,062 INFO [Listener at localhost/43809] http.HttpServer(1146): Jetty bound to port 36127 2023-07-21 18:15:00,062 INFO [Listener at localhost/43809] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:15:00,064 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,065 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@381ae7fe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:15:00,065 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,065 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16c0f7e8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:15:00,183 INFO [Listener at localhost/43809] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:15:00,184 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:15:00,184 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:15:00,184 INFO [Listener at localhost/43809] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 18:15:00,185 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,185 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f7499f6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/jetty-0_0_0_0-36127-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9125830790890601916/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:00,188 INFO [Listener at localhost/43809] server.AbstractConnector(333): Started ServerConnector@7423cc75{HTTP/1.1, (http/1.1)}{0.0.0.0:36127} 2023-07-21 18:15:00,188 INFO [Listener at localhost/43809] server.Server(415): Started @44321ms 2023-07-21 18:15:00,199 INFO [Listener at localhost/43809] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:15:00,199 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,199 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,199 INFO [Listener at localhost/43809] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:15:00,200 INFO 
[Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:00,200 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:15:00,200 INFO [Listener at localhost/43809] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:15:00,200 INFO [Listener at localhost/43809] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44645 2023-07-21 18:15:00,201 INFO [Listener at localhost/43809] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:15:00,202 DEBUG [Listener at localhost/43809] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:15:00,202 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:00,203 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:00,204 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44645 connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:15:00,207 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:446450x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:15:00,209 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44645-0x101891806670003 connected 2023-07-21 18:15:00,209 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 18:15:00,209 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:00,209 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:15:00,210 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44645 2023-07-21 18:15:00,210 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44645 2023-07-21 18:15:00,210 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44645 2023-07-21 18:15:00,211 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44645 2023-07-21 18:15:00,212 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44645 2023-07-21 18:15:00,213 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:15:00,213 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:15:00,213 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:15:00,214 INFO [Listener at localhost/43809] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:15:00,214 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:15:00,214 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:15:00,214 INFO [Listener at localhost/43809] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:15:00,215 INFO [Listener at localhost/43809] http.HttpServer(1146): Jetty bound to port 42529 2023-07-21 18:15:00,215 INFO [Listener at localhost/43809] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:15:00,219 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,219 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@57b5bf85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:15:00,219 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,220 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@61dca119{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:15:00,332 INFO [Listener at localhost/43809] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:15:00,333 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:15:00,333 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:15:00,333 INFO [Listener at localhost/43809] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:15:00,334 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:00,335 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@74bbe23e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/jetty-0_0_0_0-42529-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5890349141581313618/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:00,336 INFO [Listener at localhost/43809] server.AbstractConnector(333): Started ServerConnector@688c684{HTTP/1.1, (http/1.1)}{0.0.0.0:42529} 2023-07-21 18:15:00,337 INFO [Listener at localhost/43809] server.Server(415): Started @44470ms 2023-07-21 18:15:00,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:15:00,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7f773764{HTTP/1.1, (http/1.1)}{0.0.0.0:39465} 2023-07-21 18:15:00,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44475ms 2023-07-21 18:15:00,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,343 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 18:15:00,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,347 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:15:00,347 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:15:00,347 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:00,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:15:00,350 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:15:00,351 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 18:15:00,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34779,1689963299707 from backup master directory 2023-07-21 18:15:00,352 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:15:00,353 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,353 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 18:15:00,353 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:15:00,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/hbase.id with ID: a9db069a-2d02-43ed-ba23-8983464a04ce 2023-07-21 18:15:00,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:00,437 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:00,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x29ab8974 to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:00,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@320eb392, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:00,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:00,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 18:15:00,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:00,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store-tmp 2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 18:15:00,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 18:15:00,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
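The master has just created its local 'master:store' region from the table descriptor printed above (column family 'proc' with a ROW bloom filter, a single version, no compression or block encoding, 64 KB blocks, TTL FOREVER). The snippet below is a hedged sketch of how an equivalent descriptor is expressed with the public TableDescriptorBuilder / ColumnFamilyDescriptorBuilder API; the table name 'demo:store' is a made-up example, since 'master:store' is an internal region that applications never create themselves.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
    public static void main(String[] args) {
        // Column family roughly matching the 'proc' family printed above.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setCompressionType(Compression.Algorithm.NONE)
            .setDataBlockEncoding(DataBlockEncoding.NONE)
            .setBlocksize(64 * 1024)
            .setTimeToLive(HConstants.FOREVER)
            .build();

        // Hypothetical table name; building the descriptor needs no running cluster.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo", "store"))
            .setColumnFamily(proc)
            .build();

        System.out.println(desc);
    }
}

Most of the attributes shown are the 2.x defaults, which is why the descriptor the master logs above looks identical on a vanilla configuration.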
2023-07-21 18:15:00,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:15:00,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/WALs/jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34779%2C1689963299707, suffix=, logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/WALs/jenkins-hbase4.apache.org,34779,1689963299707, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/oldWALs, maxLogs=10 2023-07-21 18:15:00,517 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK] 2023-07-21 18:15:00,519 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK] 2023-07-21 18:15:00,519 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK] 2023-07-21 18:15:00,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/WALs/jenkins-hbase4.apache.org,34779,1689963299707/jenkins-hbase4.apache.org%2C34779%2C1689963299707.1689963300501 2023-07-21 18:15:00,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK], DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK], DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK]] 2023-07-21 18:15:00,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:15:00,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:00,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,532 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,533 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 18:15:00,534 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 18:15:00,534 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:00,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 18:15:00,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:15:00,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10249746880, jitterRate=-0.04541793465614319}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:15:00,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 18:15:00,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 18:15:00,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 18:15:00,544 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 18:15:00,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 18:15:00,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 18:15:00,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 18:15:00,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 18:15:00,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 18:15:00,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 18:15:00,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 18:15:00,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 18:15:00,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 18:15:00,556 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:00,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 18:15:00,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 18:15:00,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 18:15:00,559 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:00,559 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:00,559 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-21 18:15:00,559 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:00,559 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:00,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34779,1689963299707, sessionid=0x101891806670000, setting cluster-up flag (Was=false) 2023-07-21 18:15:00,564 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:00,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 18:15:00,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,584 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:00,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 18:15:00,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:00,589 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.hbase-snapshot/.tmp 2023-07-21 18:15:00,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 18:15:00,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 18:15:00,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 18:15:00,596 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:15:00,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
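The two coprocessor lines above are the point where the rsgroup feature comes up on the master (the endpoint is registered as RSGroupAdminService and the info manager starts in offline mode). A minimal configuration sketch for enabling the same feature outside this test harness follows; the class name RsGroupConfSketch is invented for the example, and the two property keys are the ones documented for branch-2, so verify them against the exact release in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupConfSketch {
        public static void main(String[] args) {
            // Settings that make a branch-2 master load the rsgroup endpoint and
            // the group-aware balancer, as the mini cluster in this log does.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            conf.set("hbase.master.loadbalancer.class",
                "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
            System.out.println(conf.get("hbase.coprocessor.master.classes"));
        }
    }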
2023-07-21 18:15:00,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 18:15:00,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:15:00,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 18:15:00,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 18:15:00,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 18:15:00,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:15:00,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689963330633 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 18:15:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 18:15:00,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 18:15:00,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 18:15:00,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 18:15:00,637 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:15:00,637 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 18:15:00,638 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(951): ClusterId : a9db069a-2d02-43ed-ba23-8983464a04ce 2023-07-21 18:15:00,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 18:15:00,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 18:15:00,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963300640,5,FailOnTimeoutGroup] 2023-07-21 18:15:00,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963300640,5,FailOnTimeoutGroup] 2023-07-21 18:15:00,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
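The cleaner chores initialized above are pluggable chains walked by CleanerChore for old WALs and archived HFiles. The sketch below shows how the same chains would be expressed in configuration; CleanerChainSketch is hypothetical, the plugin class names are copied from the log entries themselves, and the two property keys are the documented branch-2 ones, so treat them as assumptions to verify.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerChainSketch {
        public static void main(String[] args) {
            // Comma-separated plugin lists for the WAL cleaner and HFile cleaner
            // chores, matching the cleaners the master just initialized.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.master.logcleaner.plugins",
                "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
                + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
            conf.set("hbase.master.hfilecleaner.plugins",
                "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
                + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
                + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner");
            System.out.println(conf.get("hbase.master.hfilecleaner.plugins"));
        }
    }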
2023-07-21 18:15:00,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,641 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(951): ClusterId : a9db069a-2d02-43ed-ba23-8983464a04ce 2023-07-21 18:15:00,641 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(951): ClusterId : a9db069a-2d02-43ed-ba23-8983464a04ce 2023-07-21 18:15:00,643 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:15:00,643 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:00,647 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:15:00,647 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:15:00,651 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:15:00,651 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:15:00,652 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:15:00,653 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:15:00,653 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:15:00,653 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:15:00,655 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:15:00,656 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ReadOnlyZKClient(139): Connect 0x6b99e8dc to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:00,657 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(45): Procedure 
online-snapshot initialized 2023-07-21 18:15:00,663 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ReadOnlyZKClient(139): Connect 0x6642d7a7 to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:00,667 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:15:00,674 DEBUG [RS:2;jenkins-hbase4:44645] zookeeper.ReadOnlyZKClient(139): Connect 0x6e34bb1b to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:00,692 DEBUG [RS:1;jenkins-hbase4:34959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@655e0bdf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:00,692 DEBUG [RS:1;jenkins-hbase4:34959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@616183ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:15:00,698 DEBUG [RS:0;jenkins-hbase4:45925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@627d5fb7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:00,698 DEBUG [RS:0;jenkins-hbase4:45925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@277fdd9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:15:00,706 DEBUG [RS:2;jenkins-hbase4:44645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b3638b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:00,706 DEBUG [RS:2;jenkins-hbase4:44645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29fbfa3c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:15:00,714 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45925 2023-07-21 18:15:00,714 INFO [RS:0;jenkins-hbase4:45925] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:15:00,714 INFO [RS:0;jenkins-hbase4:45925] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:15:00,714 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1022): About to register with Master. 
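The ReadOnlyZKClient lines show each region server attaching to the test quorum at 127.0.0.1:60536 with a 90 s session timeout. A hedged sketch of a plain client connection to the same quorum follows; ZkClientSketch is hypothetical, and the port is the ephemeral one from this particular run, not a stable value.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ZkClientSketch {
        public static void main(String[] args) throws Exception {
            // Point a client at the same ZooKeeper quorum the servers use above,
            // with the 90 s session timeout the log reports.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.setInt("hbase.zookeeper.property.clientPort", 60536);
            conf.setInt("zookeeper.session.timeout", 90000);
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                System.out.println("connected: " + !connection.isClosed());
            }
        }
    }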
2023-07-21 18:15:00,714 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34959 2023-07-21 18:15:00,714 INFO [RS:1;jenkins-hbase4:34959] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:15:00,714 INFO [RS:1;jenkins-hbase4:34959] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:15:00,714 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 18:15:00,714 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34779,1689963299707 with isa=jenkins-hbase4.apache.org/172.31.14.131:45925, startcode=1689963299874 2023-07-21 18:15:00,715 DEBUG [RS:0;jenkins-hbase4:45925] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:15:00,717 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:15:00,718 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 18:15:00,718 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 2023-07-21 18:15:00,719 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44645 2023-07-21 18:15:00,719 INFO [RS:2;jenkins-hbase4:44645] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:15:00,719 INFO [RS:2;jenkins-hbase4:44645] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:15:00,719 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1022): About to register with Master. 
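The table descriptor printed for hbase:meta spells out the per-family settings (8 KB blocks, in-memory, three versions, no bloom filter for 'info'). For illustration only, the sketch below builds an ordinary descriptor with the same knobs through the public client API; hbase:meta itself is created internally by InitMetaProcedure, and DescriptorSketch plus the table name 'example' are invented for the example.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
        public static void main(String[] args) {
            // Same settings the bootstrap prints for the meta 'info' family:
            // BLOCKSIZE 8192, IN_MEMORY true, VERSIONS 3, BLOOMFILTER NONE.
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setBlocksize(8192)
                    .setInMemory(true)
                    .setMaxVersions(3)
                    .setBloomFilterType(BloomType.NONE)
                    .build())
                .build();
            System.out.println(desc);
        }
    }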
2023-07-21 18:15:00,720 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34779,1689963299707 with isa=jenkins-hbase4.apache.org/172.31.14.131:34959, startcode=1689963300038 2023-07-21 18:15:00,720 DEBUG [RS:1;jenkins-hbase4:34959] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:15:00,720 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34779,1689963299707 with isa=jenkins-hbase4.apache.org/172.31.14.131:44645, startcode=1689963300199 2023-07-21 18:15:00,720 DEBUG [RS:2;jenkins-hbase4:44645] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:15:00,723 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:15:00,728 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57725, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:15:00,728 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50513, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:15:00,732 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34779] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,732 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:15:00,733 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 18:15:00,734 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34779] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,735 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34779] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,735 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
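Once all three region servers register, the RSGroupInfoManager places them in the 'default' group (the "Updated with servers" counts below track this). A sketch of inspecting that state from a client follows, assuming the RSGroupAdminClient helper shipped in the hbase-rsgroup module this test exercises; ListGroupsSketch is hypothetical and the exact constructor and return types should be checked against the branch in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                // Talks to the same RSGroupAdminService endpoint the master
                // registered earlier in this log.
                RSGroupAdminClient groupAdmin = new RSGroupAdminClient(connection);
                for (RSGroupInfo group : groupAdmin.listRSGroups()) {
                    System.out.println(group.getName() + " -> " + group.getServers());
                }
            }
        }
    }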
2023-07-21 18:15:00,735 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 18:15:00,735 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 2023-07-21 18:15:00,735 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 2023-07-21 18:15:00,735 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34709 2023-07-21 18:15:00,736 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39617 2023-07-21 18:15:00,737 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:00,737 DEBUG [RS:2;jenkins-hbase4:44645] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,737 WARN [RS:2;jenkins-hbase4:44645] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:15:00,738 INFO [RS:2;jenkins-hbase4:44645] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:00,738 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,735 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34709 2023-07-21 18:15:00,738 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39617 2023-07-21 18:15:00,742 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 2023-07-21 18:15:00,742 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34709 2023-07-21 18:15:00,743 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39617 2023-07-21 18:15:00,745 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,745 WARN [RS:1;jenkins-hbase4:34959] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
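The WALFactory lines record that the asynchronous FS WAL provider was instantiated on each region server. The provider is selected with the hbase.wal.provider key, whose documented values include asyncfs, filesystem and multiwal; WalProviderSketch below is a hypothetical illustration of setting it explicitly.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
        public static void main(String[] args) {
            // AsyncFSWALProvider is the branch-2 default; this makes the choice explicit.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.wal.provider", "asyncfs");
            System.out.println("wal provider: " + conf.get("hbase.wal.provider"));
        }
    }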
2023-07-21 18:15:00,745 INFO [RS:1;jenkins-hbase4:34959] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:00,746 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,746 WARN [RS:0;jenkins-hbase4:45925] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:15:00,746 INFO [RS:0;jenkins-hbase4:45925] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:00,746 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,746 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,769 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45925,1689963299874] 2023-07-21 18:15:00,769 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44645,1689963300199] 2023-07-21 18:15:00,769 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34959,1689963300038] 2023-07-21 18:15:00,772 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,773 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:00,774 DEBUG [RS:2;jenkins-hbase4:44645] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,774 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,774 DEBUG [RS:2;jenkins-hbase4:44645] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,775 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,775 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,775 DEBUG [RS:2;jenkins-hbase4:44645] 
zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,776 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:15:00,776 INFO [RS:1;jenkins-hbase4:34959] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:15:00,776 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,777 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,777 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:15:00,777 INFO [RS:2;jenkins-hbase4:44645] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:15:00,778 DEBUG [RS:0;jenkins-hbase4:45925] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:15:00,778 INFO [RS:0;jenkins-hbase4:45925] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:15:00,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:15:00,788 INFO [RS:0;jenkins-hbase4:45925] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:15:00,789 INFO [RS:0;jenkins-hbase4:45925] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:15:00,789 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,789 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:15:00,790 INFO [RS:2;jenkins-hbase4:44645] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:15:00,791 INFO [RS:1;jenkins-hbase4:34959] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:15:00,791 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
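The MemStoreFlusher numbers just above (global limit 782.4 M, low mark 743.3 M) follow from the test JVM heap and two fractions. The arithmetic sketch below assumes the branch-2 defaults hbase.regionserver.global.memstore.size = 0.4 and hbase.regionserver.global.memstore.size.lower.limit = 0.95; MemstoreLimitSketch is hypothetical.

    public class MemstoreLimitSketch {
        public static void main(String[] args) {
            // Reproduce the MemStoreFlusher(125) figures under the default fractions.
            double globalLimitMb = 782.4;              // printed global memstore limit
            double lowerLimitFraction = 0.95;          // assumed default lower-limit fraction
            double lowMarkMb = globalLimitMb * lowerLimitFraction;   // ~743.3 MB
            double approxHeapMb = globalLimitMb / 0.4;               // implies a ~1.9 GB heap
            System.out.printf("low mark ~= %.1f MB, heap ~= %.0f MB%n",
                lowMarkMb, approxHeapMb);
        }
    }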
2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,796 DEBUG [RS:0;jenkins-hbase4:45925] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,797 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/info 2023-07-21 18:15:00,797 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:15:00,798 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:00,798 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 
18:15:00,800 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:15:00,801 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:15:00,801 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:00,802 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:15:00,803 INFO [RS:1;jenkins-hbase4:34959] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:15:00,803 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,803 INFO [RS:2;jenkins-hbase4:44645] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:15:00,803 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
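The CompactionConfiguration lines repeat the ratio-based selection parameters for each family (ratio 1.2, between 3 and 10 files per compaction). The sketch below illustrates the underlying inequality: a store file stays in a candidate set only while its size is at most ratio times the combined size of the other files. This is a simplified standalone check, not the actual ExploringCompactionPolicy code, and CompactionRatioSketch is hypothetical.

    import java.util.List;

    public class CompactionRatioSketch {
        // Ratio test behind the "ratio 1.200000" entries above: a file of size s
        // qualifies against peers summing to S when s <= ratio * S.
        static boolean withinRatio(long fileSize, List<Long> otherSizes, double ratio) {
            long sumOfOthers = otherSizes.stream().mapToLong(Long::longValue).sum();
            return fileSize <= ratio * sumOfOthers;
        }

        public static void main(String[] args) {
            // 100 MB file against two 40 MB peers: 100 <= 1.2 * 80 is false, so excluded.
            System.out.println(withinRatio(100L << 20, List.of(40L << 20, 40L << 20), 1.2));
        }
    }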
2023-07-21 18:15:00,803 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:15:00,803 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/table 2023-07-21 18:15:00,804 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:15:00,804 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:00,806 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:15:00,816 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,816 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,816 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,817 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:15:00,817 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,818 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,818 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,818 DEBUG [RS:2;jenkins-hbase4:44645] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,818 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740 2023-07-21 18:15:00,819 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740 2023-07-21 18:15:00,822 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 18:15:00,822 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,825 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,825 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
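The FlushLargeStoresPolicy fallback above simply divides the region flush size by the number of column families. The arithmetic below reproduces the 42.7 M figure, assuming the default 128 MB hbase.hregion.memstore.flush.size and the three meta families; FlushLowerBoundSketch is hypothetical.

    public class FlushLowerBoundSketch {
        public static void main(String[] args) {
            // hbase:meta has the families info, rep_barrier and table; the region
            // flush size is the default 128 MB (134217728 bytes).
            long memstoreFlushSize = 134_217_728L;
            int columnFamilies = 3;
            long lowerBound = memstoreFlushSize / columnFamilies;
            // 44739242 bytes ~= 42.7 MB, matching the later
            // FlushLargeStoresPolicy{flushSizeLowerBound=44739242} entry.
            System.out.println(lowerBound);
        }
    }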
2023-07-21 18:15:00,826 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:15:00,844 INFO [RS:0;jenkins-hbase4:45925] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:15:00,844 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45925,1689963299874-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,846 INFO [RS:2;jenkins-hbase4:44645] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:15:00,846 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44645,1689963300199-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,846 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,847 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:15:00,848 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,849 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,849 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,849 DEBUG [RS:1;jenkins-hbase4:34959] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:00,863 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,863 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 18:15:00,863 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:15:00,863 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:00,864 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10430308960, jitterRate=-0.02860178053379059}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:15:00,864 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:15:00,864 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:15:00,864 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:15:00,864 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:15:00,864 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:15:00,864 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:15:00,878 INFO [RS:2;jenkins-hbase4:44645] regionserver.Replication(203): jenkins-hbase4.apache.org,44645,1689963300199 started 2023-07-21 18:15:00,879 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:15:00,879 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44645,1689963300199, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44645, sessionid=0x101891806670003 2023-07-21 18:15:00,879 INFO [RS:0;jenkins-hbase4:45925] regionserver.Replication(203): jenkins-hbase4.apache.org,45925,1689963299874 started 2023-07-21 18:15:00,879 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:15:00,879 DEBUG [RS:2;jenkins-hbase4:44645] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,879 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:15:00,879 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44645,1689963300199' 2023-07-21 18:15:00,879 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:15:00,879 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45925,1689963299874, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45925, sessionid=0x101891806670001 2023-07-21 18:15:00,879 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:15:00,879 DEBUG [RS:0;jenkins-hbase4:45925] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,879 DEBUG [RS:0;jenkins-hbase4:45925] 
procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45925,1689963299874' 2023-07-21 18:15:00,879 DEBUG [RS:0;jenkins-hbase4:45925] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:15:00,880 DEBUG [RS:0;jenkins-hbase4:45925] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:15:00,880 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:15:00,880 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 18:15:00,880 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 18:15:00,880 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 18:15:00,881 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:15:00,881 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:15:00,881 INFO [RS:1;jenkins-hbase4:34959] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:15:00,881 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:15:00,881 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34959,1689963300038-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 18:15:00,881 DEBUG [RS:0;jenkins-hbase4:45925] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:00,881 DEBUG [RS:0;jenkins-hbase4:45925] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45925,1689963299874' 2023-07-21 18:15:00,881 DEBUG [RS:0;jenkins-hbase4:45925] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:15:00,881 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:15:00,881 DEBUG [RS:2;jenkins-hbase4:44645] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:00,881 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44645,1689963300199' 2023-07-21 18:15:00,882 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:15:00,882 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 18:15:00,882 DEBUG [RS:0;jenkins-hbase4:45925] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:15:00,888 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 18:15:00,889 DEBUG [RS:0;jenkins-hbase4:45925] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:15:00,889 INFO [RS:0;jenkins-hbase4:45925] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:15:00,889 INFO [RS:0;jenkins-hbase4:45925] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 18:15:00,900 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 18:15:00,901 DEBUG [RS:2;jenkins-hbase4:44645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:15:00,903 DEBUG [RS:2;jenkins-hbase4:44645] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:15:00,903 INFO [RS:2;jenkins-hbase4:44645] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:15:00,903 INFO [RS:2;jenkins-hbase4:44645] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
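The quota managers on each region server report themselves disabled because quota support is off by default. A one-line sketch of the switch that would enable them follows; QuotaConfSketch is hypothetical, and hbase.quota.enabled is the documented key, to be verified against the release in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaConfSketch {
        public static void main(String[] args) {
            // Both the RPC and space quota managers only start when this switch is on.
            Configuration conf = HBaseConfiguration.create();
            conf.setBoolean("hbase.quota.enabled", true);
            System.out.println("quotas enabled: "
                + conf.getBoolean("hbase.quota.enabled", false));
        }
    }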
2023-07-21 18:15:00,923 INFO [RS:1;jenkins-hbase4:34959] regionserver.Replication(203): jenkins-hbase4.apache.org,34959,1689963300038 started 2023-07-21 18:15:00,923 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34959,1689963300038, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34959, sessionid=0x101891806670002 2023-07-21 18:15:00,924 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:15:00,924 DEBUG [RS:1;jenkins-hbase4:34959] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,924 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34959,1689963300038' 2023-07-21 18:15:00,924 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:15:00,925 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:15:00,925 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:15:00,926 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:15:00,926 DEBUG [RS:1;jenkins-hbase4:34959] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:00,926 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34959,1689963300038' 2023-07-21 18:15:00,926 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:15:00,927 DEBUG [RS:1;jenkins-hbase4:34959] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:15:00,929 DEBUG [RS:1;jenkins-hbase4:34959] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:15:00,930 INFO [RS:1;jenkins-hbase4:34959] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:15:00,930 INFO [RS:1;jenkins-hbase4:34959] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 18:15:00,991 INFO [RS:0;jenkins-hbase4:45925] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45925%2C1689963299874, suffix=, logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,45925,1689963299874, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs, maxLogs=32 2023-07-21 18:15:01,009 INFO [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44645%2C1689963300199, suffix=, logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,44645,1689963300199, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs, maxLogs=32 2023-07-21 18:15:01,015 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK] 2023-07-21 18:15:01,015 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK] 2023-07-21 18:15:01,015 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK] 2023-07-21 18:15:01,023 INFO [RS:0;jenkins-hbase4:45925] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,45925,1689963299874/jenkins-hbase4.apache.org%2C45925%2C1689963299874.1689963300992 2023-07-21 18:15:01,023 DEBUG [RS:0;jenkins-hbase4:45925] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK], DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK], DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK]] 2023-07-21 18:15:01,035 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK] 2023-07-21 18:15:01,035 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK] 2023-07-21 18:15:01,035 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK] 2023-07-21 18:15:01,036 INFO [RS:1;jenkins-hbase4:34959] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34959%2C1689963300038, suffix=, 
logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,34959,1689963300038, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs, maxLogs=32 2023-07-21 18:15:01,043 INFO [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,44645,1689963300199/jenkins-hbase4.apache.org%2C44645%2C1689963300199.1689963301009 2023-07-21 18:15:01,043 DEBUG [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK], DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK], DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK]] 2023-07-21 18:15:01,051 DEBUG [jenkins-hbase4:34779] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 18:15:01,051 DEBUG [jenkins-hbase4:34779] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:15:01,052 DEBUG [jenkins-hbase4:34779] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:15:01,052 DEBUG [jenkins-hbase4:34779] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:15:01,052 DEBUG [jenkins-hbase4:34779] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:15:01,053 DEBUG [jenkins-hbase4:34779] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:15:01,054 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44645,1689963300199, state=OPENING 2023-07-21 18:15:01,054 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK] 2023-07-21 18:15:01,054 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK] 2023-07-21 18:15:01,054 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK] 2023-07-21 18:15:01,055 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 18:15:01,056 INFO [RS:1;jenkins-hbase4:34959] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,34959,1689963300038/jenkins-hbase4.apache.org%2C34959%2C1689963300038.1689963301036 2023-07-21 18:15:01,057 DEBUG [RS:1;jenkins-hbase4:34959] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK], DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK]] 2023-07-21 18:15:01,062 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:01,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44645,1689963300199}] 2023-07-21 18:15:01,063 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:15:01,213 WARN [ReadOnlyZKClient-127.0.0.1:60536@0x29ab8974] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 18:15:01,213 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:15:01,215 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:15:01,216 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44645] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:37100 deadline: 1689963361215, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:01,218 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:01,219 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:15:01,221 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:15:01,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 18:15:01,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:01,226 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44645%2C1689963300199.meta, suffix=.meta, logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,44645,1689963300199, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs, maxLogs=32 2023-07-21 18:15:01,246 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK] 2023-07-21 18:15:01,246 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK] 2023-07-21 18:15:01,247 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK] 2023-07-21 18:15:01,253 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,44645,1689963300199/jenkins-hbase4.apache.org%2C44645%2C1689963300199.meta.1689963301226.meta 2023-07-21 18:15:01,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK], DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK], DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK]] 2023-07-21 18:15:01,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:15:01,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:15:01,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 18:15:01,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
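The "WAL configuration: blocksize=256 MB, rollsize=128 MB, ... maxLogs=32" entries come from AbstractFSWAL reading its sizing from the configuration, with rollsize derived as blocksize * roll multiplier. The keys below are the ones I believe drive those numbers; treat the exact key names and defaults as assumptions to verify against the running version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalSizingSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Block size used when allocating WAL blocks on HDFS (256 MB in the log above).
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        // Roll the WAL once it reaches blocksize * multiplier (0.5 -> 128 MB rollsize).
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // Upper bound on un-archived WAL files per server (maxLogs=32 in the log).
        conf.setInt("hbase.regionserver.maxlogs", 32);
        long rollsize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
            * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
        System.out.println("rollsize = " + rollsize);
      }
    }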
2023-07-21 18:15:01,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 18:15:01,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:01,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 18:15:01,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 18:15:01,255 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 18:15:01,256 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/info 2023-07-21 18:15:01,256 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/info 2023-07-21 18:15:01,256 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 18:15:01,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:01,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 18:15:01,258 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:15:01,258 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/rep_barrier 2023-07-21 18:15:01,258 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 18:15:01,259 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:01,259 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 18:15:01,259 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/table 2023-07-21 18:15:01,260 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/table 2023-07-21 18:15:01,260 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 18:15:01,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:01,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740 2023-07-21 18:15:01,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740 2023-07-21 18:15:01,264 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
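The CompactionConfiguration lines printed for each column family of hbase:meta (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560) are derived from the standard store-compaction settings. A sketch mapping them to configuration keys as I understand them (key names are assumptions worth double-checking; values are copied from the log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);              // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);             // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);       // selection ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        // Compactions larger than this go to the "large" compaction thread pool;
        // 2684354560 = 2 * maxFilesToCompact * 128 MB memstore flush size.
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2_684_354_560L);
        System.out.println(conf.get("hbase.hstore.compaction.ratio"));
      }
    }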
2023-07-21 18:15:01,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 18:15:01,267 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11209089600, jitterRate=0.04392781853675842}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 18:15:01,267 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 18:15:01,268 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689963301218 2023-07-21 18:15:01,273 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 18:15:01,273 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 18:15:01,275 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44645,1689963300199, state=OPEN 2023-07-21 18:15:01,276 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 18:15:01,276 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 18:15:01,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 18:15:01,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44645,1689963300199 in 213 msec 2023-07-21 18:15:01,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 18:15:01,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 398 msec 2023-07-21 18:15:01,281 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 683 msec 2023-07-21 18:15:01,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689963301281, completionTime=-1 2023-07-21 18:15:01,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 18:15:01,282 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
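At this point hbase:meta has been assigned to jenkins-hbase4.apache.org,44645 and its location published under /hbase/meta-region-server, so the earlier NotServingRegionException retry from the RSGroup startup worker can succeed once the state flips to OPEN. From a client, the same location can be observed with a RegionLocator; a small illustrative sketch (connection setup assumed):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // For this run this would print the server that just opened 1588230740,
          // i.e. jenkins-hbase4.apache.org,44645,1689963300199.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }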
2023-07-21 18:15:01,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 18:15:01,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689963361286 2023-07-21 18:15:01,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689963421286 2023-07-21 18:15:01,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34779,1689963299707-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34779,1689963299707-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34779,1689963299707-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34779, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 18:15:01,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:01,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 18:15:01,299 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 18:15:01,300 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:15:01,301 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:15:01,303 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/namespace/1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,304 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/namespace/1debd766be32a30cd334b2093439b36e empty. 2023-07-21 18:15:01,304 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/namespace/1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,304 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 18:15:01,319 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 18:15:01,320 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1debd766be32a30cd334b2093439b36e, NAME => 'hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp 2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1debd766be32a30cd334b2093439b36e, disabling compactions & flushes 2023-07-21 18:15:01,341 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 
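The create above logs the full descriptor the master uses for hbase:namespace (single 'info' family, VERSIONS => '10', IN_MEMORY => 'true', BLOOMFILTER => 'ROW', BLOCKSIZE => '8192'). Internally this runs through CreateTableProcedure (pid=4), but the same schema can be expressed with the public builder API; a sketch using an assumed user table name rather than the system table:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void main(String[] args) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_ns_like"))        // placeholder table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(10)                               // VERSIONS => '10'
                .setInMemory(true)                                // IN_MEMORY => 'true'
                .setBloomFilterType(BloomType.ROW)                // BLOOMFILTER => 'ROW'
                .setBlocksize(8192)                               // BLOCKSIZE => '8192'
                .setTimeToLive(HConstants.FOREVER)                // TTL => 'FOREVER'
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Same CreateTableProcedure path as the logged pid=4, but for a user table.
          admin.createTable(desc);
        }
      }
    }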
2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. after waiting 0 ms 2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:01,341 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:01,341 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1debd766be32a30cd334b2093439b36e: 2023-07-21 18:15:01,346 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:15:01,347 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963301347"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963301347"}]},"ts":"1689963301347"} 2023-07-21 18:15:01,350 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:15:01,351 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:15:01,351 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963301351"}]},"ts":"1689963301351"} 2023-07-21 18:15:01,352 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 18:15:01,355 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:15:01,355 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:15:01,355 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:15:01,355 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:15:01,355 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:15:01,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1debd766be32a30cd334b2093439b36e, ASSIGN}] 2023-07-21 18:15:01,357 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1debd766be32a30cd334b2093439b36e, ASSIGN 2023-07-21 18:15:01,357 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1debd766be32a30cd334b2093439b36e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44645,1689963300199; forceNewPlan=false, retain=false 2023-07-21 18:15:01,508 INFO [jenkins-hbase4:34779] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:15:01,509 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1debd766be32a30cd334b2093439b36e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:01,509 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963301509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963301509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963301509"}]},"ts":"1689963301509"} 2023-07-21 18:15:01,511 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1debd766be32a30cd334b2093439b36e, server=jenkins-hbase4.apache.org,44645,1689963300199}] 2023-07-21 18:15:01,669 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:01,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1debd766be32a30cd334b2093439b36e, NAME => 'hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:15:01,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:01,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,671 INFO [StoreOpener-1debd766be32a30cd334b2093439b36e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,672 DEBUG [StoreOpener-1debd766be32a30cd334b2093439b36e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/info 2023-07-21 18:15:01,672 DEBUG [StoreOpener-1debd766be32a30cd334b2093439b36e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/info 2023-07-21 
18:15:01,673 INFO [StoreOpener-1debd766be32a30cd334b2093439b36e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1debd766be32a30cd334b2093439b36e columnFamilyName info 2023-07-21 18:15:01,673 INFO [StoreOpener-1debd766be32a30cd334b2093439b36e-1] regionserver.HStore(310): Store=1debd766be32a30cd334b2093439b36e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:01,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:01,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:15:01,681 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1debd766be32a30cd334b2093439b36e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9613551360, jitterRate=-0.1046682596206665}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:15:01,681 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1debd766be32a30cd334b2093439b36e: 2023-07-21 18:15:01,682 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e., pid=6, masterSystemTime=1689963301663 2023-07-21 18:15:01,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:01,685 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 
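With the hbase:namespace region open (pid=6 above), namespace metadata becomes writable and the master immediately bootstraps the 'default' and 'hbase' namespaces via CreateNamespaceProcedure in the entries that follow. The equivalent client-side calls, as an illustrative sketch with a placeholder namespace name:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Creating a namespace runs a CreateNamespaceProcedure on the master,
          // just like the bootstrapped 'default' and 'hbase' namespaces below.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }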
2023-07-21 18:15:01,686 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1debd766be32a30cd334b2093439b36e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:01,686 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689963301685"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963301685"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963301685"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963301685"}]},"ts":"1689963301685"} 2023-07-21 18:15:01,688 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 18:15:01,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1debd766be32a30cd334b2093439b36e, server=jenkins-hbase4.apache.org,44645,1689963300199 in 176 msec 2023-07-21 18:15:01,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 18:15:01,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1debd766be32a30cd334b2093439b36e, ASSIGN in 334 msec 2023-07-21 18:15:01,692 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:15:01,692 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963301692"}]},"ts":"1689963301692"} 2023-07-21 18:15:01,694 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 18:15:01,700 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:15:01,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 18:15:01,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 402 msec 2023-07-21 18:15:01,702 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:15:01,702 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:01,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 18:15:01,718 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): 
master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:15:01,721 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:01,722 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-21 18:15:01,723 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 18:15:01,724 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=8, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:15:01,729 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=8, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:15:01,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 18:15:01,731 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 18:15:01,731 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 18:15:01,732 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:01,732 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a empty. 
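The hbase:rsgroup bootstrap table above is created with the MultiRowMutationEndpoint coprocessor attached and DisabledRegionSplitPolicy as its split policy. Expressing those two table attributes through the public builder API looks roughly like this (table name is a placeholder; method names are from the 2.x client as I recall them and are worth verifying):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeTableSketch {
      public static void main(String[] args) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_rsgroup_like"))
            // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // SPLIT_POLICY => DisabledRegionSplitPolicy keeps the table in a single region.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))   // single 'm' family, defaults
            .build();
        System.out.println(desc);
      }
    }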
2023-07-21 18:15:01,733 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:01,733 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 18:15:01,755 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 18:15:01,757 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 00677051c5de5ea202c03eabf9ca4d7a, NAME => 'hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp 2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 00677051c5de5ea202c03eabf9ca4d7a, disabling compactions & flushes 2023-07-21 18:15:01,769 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. after waiting 0 ms 2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:01,769 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 
2023-07-21 18:15:01,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 00677051c5de5ea202c03eabf9ca4d7a: 2023-07-21 18:15:01,775 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=8, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:15:01,776 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:15:01,776 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 18:15:01,776 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:15:01,776 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 18:15:01,777 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:15:01,777 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 18:15:01,781 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963301781"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963301781"}]},"ts":"1689963301781"} 2023-07-21 18:15:01,783 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 18:15:01,784 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=8, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:15:01,784 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963301784"}]},"ts":"1689963301784"} 2023-07-21 18:15:01,785 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 18:15:01,789 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:15:01,789 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:15:01,789 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:15:01,789 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:15:01,789 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:15:01,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=8, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=00677051c5de5ea202c03eabf9ca4d7a, ASSIGN}] 2023-07-21 18:15:01,791 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=8, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=00677051c5de5ea202c03eabf9ca4d7a, ASSIGN 2023-07-21 18:15:01,791 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=8, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=00677051c5de5ea202c03eabf9ca4d7a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34959,1689963300038; forceNewPlan=false, retain=false 2023-07-21 18:15:01,942 INFO [jenkins-hbase4:34779] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 18:15:01,943 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=00677051c5de5ea202c03eabf9ca4d7a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:01,944 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963301943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963301943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963301943"}]},"ts":"1689963301943"} 2023-07-21 18:15:01,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 00677051c5de5ea202c03eabf9ca4d7a, server=jenkins-hbase4.apache.org,34959,1689963300038}] 2023-07-21 18:15:02,098 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,098 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 18:15:02,100 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54064, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 18:15:02,104 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:02,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00677051c5de5ea202c03eabf9ca4d7a, NAME => 'hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:15:02,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 18:15:02,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. service=MultiRowMutationService 2023-07-21 18:15:02,105 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 18:15:02,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:02,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,106 INFO [StoreOpener-00677051c5de5ea202c03eabf9ca4d7a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,108 DEBUG [StoreOpener-00677051c5de5ea202c03eabf9ca4d7a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/m 2023-07-21 18:15:02,108 DEBUG [StoreOpener-00677051c5de5ea202c03eabf9ca4d7a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/m 2023-07-21 18:15:02,108 INFO [StoreOpener-00677051c5de5ea202c03eabf9ca4d7a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 00677051c5de5ea202c03eabf9ca4d7a columnFamilyName m 2023-07-21 18:15:02,109 INFO [StoreOpener-00677051c5de5ea202c03eabf9ca4d7a-1] regionserver.HStore(310): Store=00677051c5de5ea202c03eabf9ca4d7a/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:02,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,113 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:02,115 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:15:02,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 00677051c5de5ea202c03eabf9ca4d7a; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@38070fbc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:15:02,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 00677051c5de5ea202c03eabf9ca4d7a: 2023-07-21 18:15:02,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a., pid=11, masterSystemTime=1689963302098 2023-07-21 18:15:02,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:02,121 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:02,121 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=00677051c5de5ea202c03eabf9ca4d7a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,121 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689963302121"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963302121"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963302121"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963302121"}]},"ts":"1689963302121"} 2023-07-21 18:15:02,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-21 18:15:02,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 00677051c5de5ea202c03eabf9ca4d7a, server=jenkins-hbase4.apache.org,34959,1689963300038 in 178 msec 2023-07-21 18:15:02,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=8 2023-07-21 18:15:02,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=8, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=00677051c5de5ea202c03eabf9ca4d7a, ASSIGN in 335 msec 2023-07-21 18:15:02,137 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:15:02,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 411 msec 2023-07-21 18:15:02,141 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=8, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:15:02,141 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963302141"}]},"ts":"1689963302141"} 2023-07-21 18:15:02,143 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 18:15:02,145 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=8, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:15:02,146 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 424 msec 2023-07-21 18:15:02,147 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 18:15:02,150 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 18:15:02,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.796sec 2023-07-21 18:15:02,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 18:15:02,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 18:15:02,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 18:15:02,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34779,1689963299707-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 18:15:02,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34779,1689963299707-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
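At this point CreateTableProcedure pid=8 has marked hbase:rsgroup ENABLED in hbase:meta and the master reports initialization complete; the RSGroupStartupWorker entries that follow then see the table come online and refresh the cached group information. A minimal client-side sketch of waiting for that table, assuming only a reachable cluster configuration on the classpath (not taken from the test code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForRsGroupTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName rsgroup = TableName.valueOf("hbase", "rsgroup");
          // Poll until the table that CreateTableProcedure just enabled is reported available.
          while (!admin.isTableAvailable(rsgroup)) {
            Thread.sleep(100);
          }
          System.out.println("hbase:rsgroup is online");
        }
      }
    }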
2023-07-21 18:15:02,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 18:15:02,225 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:15:02,227 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:15:02,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 18:15:02,229 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 18:15:02,234 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:02,234 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:02,235 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:15:02,237 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 18:15:02,245 DEBUG [Listener at localhost/43809] zookeeper.ReadOnlyZKClient(139): Connect 0x3f8504c5 to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:02,251 DEBUG [Listener at localhost/43809] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77dec720, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:02,252 DEBUG [hconnection-0x5c5f61f7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:15:02,254 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:15:02,255 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:02,255 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:02,257 DEBUG [Listener at localhost/43809] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 18:15:02,258 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46000, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 18:15:02,261 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 18:15:02,261 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:02,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 18:15:02,262 DEBUG [Listener at localhost/43809] zookeeper.ReadOnlyZKClient(139): Connect 0x196bd676 to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:02,271 DEBUG [Listener at localhost/43809] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e1c5c75, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:02,272 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:15:02,275 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:15:02,276 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10189180667000a connected 2023-07-21 18:15:02,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:02,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:02,282 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 18:15:02,296 INFO [Listener at localhost/43809] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 18:15:02,296 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:02,296 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:02,296 INFO [Listener at localhost/43809] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 18:15:02,296 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 18:15:02,297 INFO [Listener at localhost/43809] ipc.RpcExecutor(189): 
Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 18:15:02,297 INFO [Listener at localhost/43809] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 18:15:02,297 INFO [Listener at localhost/43809] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39687 2023-07-21 18:15:02,298 INFO [Listener at localhost/43809] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 18:15:02,299 DEBUG [Listener at localhost/43809] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 18:15:02,299 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:02,300 INFO [Listener at localhost/43809] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 18:15:02,301 INFO [Listener at localhost/43809] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39687 connecting to ZooKeeper ensemble=127.0.0.1:60536 2023-07-21 18:15:02,304 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:396870x0, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 18:15:02,306 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(162): regionserver:396870x0, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 18:15:02,306 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39687-0x10189180667000b connected 2023-07-21 18:15:02,307 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 18:15:02,307 DEBUG [Listener at localhost/43809] zookeeper.ZKUtil(164): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 18:15:02,308 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39687 2023-07-21 18:15:02,308 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39687 2023-07-21 18:15:02,308 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39687 2023-07-21 18:15:02,308 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39687 2023-07-21 18:15:02,309 DEBUG [Listener at localhost/43809] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39687 2023-07-21 18:15:02,310 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 18:15:02,310 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 
'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 18:15:02,311 INFO [Listener at localhost/43809] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 18:15:02,311 INFO [Listener at localhost/43809] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 18:15:02,311 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 18:15:02,311 INFO [Listener at localhost/43809] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 18:15:02,311 INFO [Listener at localhost/43809] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 18:15:02,312 INFO [Listener at localhost/43809] http.HttpServer(1146): Jetty bound to port 45393 2023-07-21 18:15:02,312 INFO [Listener at localhost/43809] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 18:15:02,313 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:02,313 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@57d9343d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,AVAILABLE} 2023-07-21 18:15:02,313 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:02,313 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29eacbd3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 18:15:02,430 INFO [Listener at localhost/43809] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 18:15:02,431 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 18:15:02,431 INFO [Listener at localhost/43809] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 18:15:02,431 INFO [Listener at localhost/43809] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 18:15:02,432 INFO [Listener at localhost/43809] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 18:15:02,433 INFO [Listener at localhost/43809] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@bac8428{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/java.io.tmpdir/jetty-0_0_0_0-45393-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2779025307231111406/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:02,434 INFO [Listener at localhost/43809] server.AbstractConnector(333): Started ServerConnector@1c41e666{HTTP/1.1, (http/1.1)}{0.0.0.0:45393} 2023-07-21 18:15:02,434 INFO [Listener at localhost/43809] server.Server(415): Started @46568ms 2023-07-21 18:15:02,437 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(951): ClusterId : a9db069a-2d02-43ed-ba23-8983464a04ce 2023-07-21 18:15:02,438 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 18:15:02,440 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 18:15:02,440 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 18:15:02,443 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 18:15:02,445 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ReadOnlyZKClient(139): Connect 0x66006577 to 127.0.0.1:60536 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 18:15:02,450 DEBUG [RS:3;jenkins-hbase4:39687] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68c797c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 18:15:02,450 DEBUG [RS:3;jenkins-hbase4:39687] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cf80e07, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 18:15:02,460 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:39687 2023-07-21 18:15:02,460 INFO [RS:3;jenkins-hbase4:39687] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 18:15:02,460 INFO [RS:3;jenkins-hbase4:39687] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 18:15:02,460 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1022): About to register with Master. 
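The "Restoring servers: 1" entry marks TestRSGroupsBase bringing the cluster back to its expected server count, and the RS:3;jenkins-hbase4:39687 entries that follow are that extra region server starting inside the minicluster and registering with the master. A hedged sketch of how a test can do this with HBaseTestingUtility, assuming a running mini cluster; the helper name and the 60-second wait are illustrative, not lifted from the test:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class RestoreServersSketch {
      // Starts one extra region server thread in a running mini cluster and waits
      // until the master's ServerManager reports it online, mirroring the
      // reportForDuty/registration sequence visible in the log.
      static void restoreOneRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        int before = cluster.getMaster().getServerManager().getOnlineServersList().size();
        cluster.startRegionServer();
        testUtil.waitFor(60000, () ->
            cluster.getMaster().getServerManager().getOnlineServersList().size() > before);
      }
    }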
2023-07-21 18:15:02,461 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34779,1689963299707 with isa=jenkins-hbase4.apache.org/172.31.14.131:39687, startcode=1689963302296 2023-07-21 18:15:02,461 DEBUG [RS:3;jenkins-hbase4:39687] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 18:15:02,464 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34807, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 18:15:02,464 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34779] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,464 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 18:15:02,465 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1 2023-07-21 18:15:02,465 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34709 2023-07-21 18:15:02,465 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39617 2023-07-21 18:15:02,469 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:02,469 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:02,469 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:02,469 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:02,469 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:02,469 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,469 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39687,1689963302296] 2023-07-21 18:15:02,470 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 18:15:02,470 WARN 
[RS:3;jenkins-hbase4:39687] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 18:15:02,470 INFO [RS:3;jenkins-hbase4:39687] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 18:15:02,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,470 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:02,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:02,472 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 18:15:02,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:02,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:02,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:02,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:02,475 DEBUG 
[RS:3;jenkins-hbase4:39687] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:02,475 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,475 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:02,475 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:02,476 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ZKUtil(162): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,476 DEBUG [RS:3;jenkins-hbase4:39687] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 18:15:02,477 INFO [RS:3;jenkins-hbase4:39687] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 18:15:02,478 INFO [RS:3;jenkins-hbase4:39687] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 18:15:02,478 INFO [RS:3;jenkins-hbase4:39687] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 18:15:02,478 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:02,478 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 18:15:02,480 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,481 DEBUG [RS:3;jenkins-hbase4:39687] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 18:15:02,484 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:02,484 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:02,484 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 18:15:02,495 INFO [RS:3;jenkins-hbase4:39687] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 18:15:02,495 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39687,1689963302296-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
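Several of the surrounding entries register ScheduledChore instances (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore) with the region server's ChoreService, which simply runs each chore's chore() on its configured period until the owning Stoppable is stopped. A generic sketch of that mechanism, with made-up names and period:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        // A trivial stopper; real chores pass their server as the Stoppable.
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("chore-sketch");
        // chore() runs every 1000 ms until the stopper is stopped or the chore is cancelled.
        ScheduledChore ticker = new ScheduledChore("tickerChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("chore tick");
          }
        };
        service.scheduleChore(ticker);
        Thread.sleep(3500);
        service.shutdown();
      }
    }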
2023-07-21 18:15:02,506 INFO [RS:3;jenkins-hbase4:39687] regionserver.Replication(203): jenkins-hbase4.apache.org,39687,1689963302296 started 2023-07-21 18:15:02,506 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39687,1689963302296, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39687, sessionid=0x10189180667000b 2023-07-21 18:15:02,506 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 18:15:02,506 DEBUG [RS:3;jenkins-hbase4:39687] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,506 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39687,1689963302296' 2023-07-21 18:15:02,506 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 18:15:02,506 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 18:15:02,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39687,1689963302296' 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 18:15:02,507 DEBUG [RS:3;jenkins-hbase4:39687] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 18:15:02,508 DEBUG [RS:3;jenkins-hbase4:39687] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 18:15:02,508 INFO [RS:3;jenkins-hbase4:39687] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 18:15:02,508 INFO [RS:3;jenkins-hbase4:39687] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
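The "add rsgroup master" entry above and the moveServers call in the entries that follow are TestRSGroupsBase's per-method teardown (tearDownAfterMethod via afterMethod, per the stack trace below): it creates a group named "master" and tries to move the master's host:port into it, which RSGroupAdminServer rejects with ConstraintException because the master is not an online region server; the test logs this as "Got this on setup, FYI" and continues. A sketch of those two client calls, assuming an open Connection and the master's address as inputs (the wrapper names are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterToGroupSketch {
      // Mirrors the teardown sequence in the log: create a group named "master",
      // then attempt to move the master's host:port into it. The master is not an
      // online region server, so the move fails with ConstraintException.
      static void tryMoveMaster(Connection conn, String masterHost, int masterPort) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts(masterHost, masterPort)), "master");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist." -- tolerated, as in the log.
        }
      }
    }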
2023-07-21 18:15:02,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:02,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:02,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:02,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:02,514 DEBUG [hconnection-0x7a3db533-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:15:02,515 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37126, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:15:02,519 DEBUG [hconnection-0x7a3db533-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 18:15:02,520 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54068, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 18:15:02,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:02,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:02,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:02,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:02,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46000 deadline: 1689964502525, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:02,526 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:15:02,527 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:02,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:02,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:02,528 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:02,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:02,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:02,578 INFO [Listener at localhost/43809] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=556 (was 523) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:42925 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1907410722-2350 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963300640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data6/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44645Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_550286816_17 at /127.0.0.1:42634 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-6dcecf20-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1-prefix:jenkins-hbase4.apache.org,44645,1689963300199.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x3f8504c5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
LeaseRenewer:jenkins@localhost:34709 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp123461162-2303-acceptor-0@6f4ce28c-ServerConnector@7423cc75{HTTP/1.1, (http/1.1)}{0.0.0.0:36127} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data2/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-379d8987-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6642d7a7-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 45789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1907410722-2345 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 45789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp37107660-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5c5f61f7-shared-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1907410722-2344 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:45925-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1373228205_17 at /127.0.0.1:42616 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1668540005_17 at /127.0.0.1:36306 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6e34bb1b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1668540005_17 at /127.0.0.1:42582 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1728368370-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData-prefix:jenkins-hbase4.apache.org,34779,1689963299707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp123461162-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:45925Replication 
Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp227946482-2615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x29ab8974-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 34709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp123461162-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp123461162-2302 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Session-HouseKeeper-2032183b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6b99e8dc-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:39687Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x3f8504c5-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1728368370-2242-acceptor-0@19a3214d-ServerConnector@1c0afd76{HTTP/1.1, (http/1.1)}{0.0.0.0:39617} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7a3db533-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data3/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:34959-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42925 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x29ab8974-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp123461162-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp37107660-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp609989654-2332 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:36382 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1907410722-2348 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp227946482-2610 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@45ea5797[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@34c442b0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:36354 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6b99e8dc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 38185 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp37107660-2272 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1452bb87-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp37107660-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6e34bb1b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:34959Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x66006577-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:34709 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44645 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp37107660-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp609989654-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x29ab8974 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@401c718[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1373228205_17 at /127.0.0.1:53598 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42925 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp37107660-2273-acceptor-0@f6b8f56-ServerConnector@42d0e97d{HTTP/1.1, 
(http/1.1)}{0.0.0.0:35771} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6642d7a7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1-prefix:jenkins-hbase4.apache.org,34959,1689963300038 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46525,1689963293524 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:53604 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1540177555@qtp-639488987-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35661 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost/43809.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp609989654-2339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728368370-2245 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_550286816_17 at /127.0.0.1:36366 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp37107660-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp609989654-2338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7e2a0835 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 38185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 43809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1907410722-2343 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728368370-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51543@0x4fe30c6b-SendThread(127.0.0.1:51543) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x196bd676-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 45789 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp609989654-2336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963300640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: M:0;jenkins-hbase4:34779 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) 
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728368370-2247 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:53620 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 105018280@qtp-1262208426-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36187 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp123461162-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:42626 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:60536 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: Listener at localhost/40193-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x66006577-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_550286816_17 at /127.0.0.1:42554 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-4 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:34709 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp227946482-2614 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 776820827@qtp-1262208426-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51543@0x4fe30c6b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42925 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x3f8504c5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 34709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1728368370-2246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 45789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42925 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data5/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp227946482-2612 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:34709 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@727205a5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6e70123 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 34709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x1452bb87-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6014c935 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728368370-2241 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:45925 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6e34bb1b-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6b99e8dc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34779,1689963299707 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp123461162-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7a3db533-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42925 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@60599a6d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-565-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39687 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@19745946 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/43809.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1-prefix:jenkins-hbase4.apache.org,45925,1689963299874 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp123461162-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:34959 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 894640102@qtp-1013201932-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2a04933e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@304a0fdb sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1907410722-2346 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x6642d7a7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 38185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x196bd676-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp609989654-2333-acceptor-0@1162d826-ServerConnector@688c684{HTTP/1.1, (http/1.1)}{0.0.0.0:42529} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1452bb87-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp227946482-2611 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 2 on default port 45789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1907410722-2349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51543@0x4fe30c6b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/40193-SendThread(127.0.0.1:51543) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 38185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data4/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42925 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x66006577 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34709 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 34709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 118986766@qtp-314623284-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41395 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1907410722-2347-acceptor-0@62e1389c-ServerConnector@7f773764{HTTP/1.1, (http/1.1)}{0.0.0.0:39465} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42925 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 38185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:60536): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: Session-HouseKeeper-512528f6-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 38185 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp227946482-2609-acceptor-0@2f192d8-ServerConnector@1c41e666{HTTP/1.1, (http/1.1)}{0.0.0.0:45393} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39687 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728368370-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1373228205_17 at /127.0.0.1:36338 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 34709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-6eacd45b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44645-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 350762853@qtp-314623284-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60536@0x196bd676 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$89/999208717.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp609989654-2335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@34c542df java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@46902ed9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1638088195) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp227946482-2608 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/122747294.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: Listener at localhost/43809 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1-prefix:jenkins-hbase4.apache.org,44645,1689963300199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@1e7c836c java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4db918e8[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-570-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43809 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1668540005_17 at /127.0.0.1:53564 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44645 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 639390122@qtp-1013201932-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40723 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp227946482-2613 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-514520014_17 at /127.0.0.1:42650 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1517975358-172.31.14.131-1689963298839:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34709 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_550286816_17 at /127.0.0.1:53614 [Receiving block BP-1517975358-172.31.14.131-1689963298839:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:39687-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp37107660-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5e73f58b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/43809-SendThread(127.0.0.1:60536) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data1/current/BP-1517975358-172.31.14.131-1689963298839 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43809-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp609989654-2334 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34959 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42925 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 2083819466@qtp-639488987-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) - Thread LEAK? -, OpenFileDescriptor=833 (was 823) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=526 (was 561), ProcessCount=173 (was 174), AvailableMemoryMB=7260 (was 7431)
2023-07-21 18:15:02,582 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=556 is superior to 500
2023-07-21 18:15:02,602 INFO [Listener at localhost/43809] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=556, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=526, ProcessCount=173, AvailableMemoryMB=7259
2023-07-21 18:15:02,602 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=556 is superior to 500
2023-07-21 18:15:02,602 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable
2023-07-21 18:15:02,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-21 18:15:02,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 18:15:02,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-21 18:15:02,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-21 18:15:02,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-21 18:15:02,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-21 18:15:02,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-21 18:15:02,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-21 18:15:02,609 INFO [RS:3;jenkins-hbase4:39687] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39687%2C1689963302296, suffix=, logDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,39687,1689963302296, archiveDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs, maxLogs=32
2023-07-21 18:15:02,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 18:15:02,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 18:15:02,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 18:15:02,616 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-21 18:15:02,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-21 18:15:02,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 18:15:02,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 18:15:02,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 18:15:02,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 18:15:02,629 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK]
2023-07-21 18:15:02,629 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK]
2023-07-21 18:15:02,629 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK]
2023-07-21 18:15:02,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-21 18:15:02,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 18:15:02,631 INFO [RS:3;jenkins-hbase4:39687] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/WALs/jenkins-hbase4.apache.org,39687,1689963302296/jenkins-hbase4.apache.org%2C39687%2C1689963302296.1689963302610
2023-07-21 18:15:02,631 DEBUG [RS:3;jenkins-hbase4:39687] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42999,DS-fbf654ae-10e5-4a82-91b9-5bfa1822d1e5,DISK], DatanodeInfoWithStorage[127.0.0.1:36115,DS-b045540e-f2f7-4c13-9b54-019e5d307dcf,DISK], DatanodeInfoWithStorage[127.0.0.1:35465,DS-b1f46eb7-7fc9-4530-99e9-bb8bffad3df7,DISK]]
2023-07-21 18:15:02,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master
2023-07-21 18:15:02,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 18:15:02,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46000 deadline: 1689964502632, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist.
2023-07-21 18:15:02,632 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:15:02,634 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:02,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:02,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:02,635 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:02,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:02,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:02,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:02,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 18:15:02,639 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:15:02,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: 
namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 18:15:02,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 18:15:02,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:02,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:02,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:02,644 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 18:15:02,645 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,646 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 empty. 2023-07-21 18:15:02,646 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,646 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 18:15:02,657 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 18:15:02,659 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 67a84a0700d78aab502ab4100c0dfc29, NAME => 't1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp 2023-07-21 18:15:02,670 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:02,670 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 67a84a0700d78aab502ab4100c0dfc29, disabling compactions & flushes 2023-07-21 18:15:02,670 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:02,671 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:02,671 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 
after waiting 0 ms 2023-07-21 18:15:02,671 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:02,671 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:02,671 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 67a84a0700d78aab502ab4100c0dfc29: 2023-07-21 18:15:02,673 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 18:15:02,674 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963302674"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963302674"}]},"ts":"1689963302674"} 2023-07-21 18:15:02,675 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 18:15:02,676 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 18:15:02,676 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963302676"}]},"ts":"1689963302676"} 2023-07-21 18:15:02,677 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 18:15:02,680 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 18:15:02,681 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, ASSIGN}] 2023-07-21 18:15:02,682 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, ASSIGN 2023-07-21 18:15:02,683 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44645,1689963300199; forceNewPlan=false, retain=false 2023-07-21 18:15:02,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 18:15:02,833 INFO [jenkins-hbase4:34779] 
balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 18:15:02,834 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=67a84a0700d78aab502ab4100c0dfc29, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:02,835 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963302834"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963302834"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963302834"}]},"ts":"1689963302834"} 2023-07-21 18:15:02,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 67a84a0700d78aab502ab4100c0dfc29, server=jenkins-hbase4.apache.org,44645,1689963300199}] 2023-07-21 18:15:02,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 18:15:02,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:02,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 67a84a0700d78aab502ab4100c0dfc29, NAME => 't1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.', STARTKEY => '', ENDKEY => ''} 2023-07-21 18:15:02,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 18:15:02,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,994 INFO [StoreOpener-67a84a0700d78aab502ab4100c0dfc29-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,995 DEBUG [StoreOpener-67a84a0700d78aab502ab4100c0dfc29-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/cf1 2023-07-21 18:15:02,995 DEBUG [StoreOpener-67a84a0700d78aab502ab4100c0dfc29-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/cf1 2023-07-21 18:15:02,995 INFO [StoreOpener-67a84a0700d78aab502ab4100c0dfc29-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 67a84a0700d78aab502ab4100c0dfc29 columnFamilyName cf1 2023-07-21 18:15:02,996 INFO [StoreOpener-67a84a0700d78aab502ab4100c0dfc29-1] regionserver.HStore(310): Store=67a84a0700d78aab502ab4100c0dfc29/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 18:15:02,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:02,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 18:15:03,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 67a84a0700d78aab502ab4100c0dfc29; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11259377440, jitterRate=0.04861123859882355}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 18:15:03,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 67a84a0700d78aab502ab4100c0dfc29: 2023-07-21 18:15:03,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29., pid=14, masterSystemTime=1689963302988 2023-07-21 18:15:03,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:03,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 
2023-07-21 18:15:03,007 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=67a84a0700d78aab502ab4100c0dfc29, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:03,008 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963303007"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689963303007"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689963303007"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689963303007"}]},"ts":"1689963303007"} 2023-07-21 18:15:03,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 18:15:03,011 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 67a84a0700d78aab502ab4100c0dfc29, server=jenkins-hbase4.apache.org,44645,1689963300199 in 173 msec 2023-07-21 18:15:03,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 18:15:03,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, ASSIGN in 329 msec 2023-07-21 18:15:03,015 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 18:15:03,015 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963303015"}]},"ts":"1689963303015"} 2023-07-21 18:15:03,016 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 18:15:03,018 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 18:15:03,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 381 msec 2023-07-21 18:15:03,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 18:15:03,243 INFO [Listener at localhost/43809] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 18:15:03,243 DEBUG [Listener at localhost/43809] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 18:15:03,243 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,245 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 18:15:03,246 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,246 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-21 18:15:03,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 18:15:03,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 18:15:03,250 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 18:15:03,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 18:15:03,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:46000 deadline: 1689963363247, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 18:15:03,252 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-21 18:15:03,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:03,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:03,354 INFO [Listener at localhost/43809] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 18:15:03,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-21 18:15:03,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 18:15:03,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:15:03,358 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963303358"}]},"ts":"1689963303358"} 2023-07-21 18:15:03,360 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 18:15:03,362 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 18:15:03,363 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, UNASSIGN}] 2023-07-21 18:15:03,363 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, UNASSIGN 2023-07-21 18:15:03,364 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=67a84a0700d78aab502ab4100c0dfc29, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:03,364 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963303364"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689963303364"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689963303364"}]},"ts":"1689963303364"} 2023-07-21 18:15:03,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 67a84a0700d78aab502ab4100c0dfc29, server=jenkins-hbase4.apache.org,44645,1689963300199}] 2023-07-21 18:15:03,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:15:03,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 67a84a0700d78aab502ab4100c0dfc29, disabling compactions & flushes 2023-07-21 18:15:03,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:03,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:03,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. after waiting 0 ms 2023-07-21 18:15:03,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 
2023-07-21 18:15:03,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 18:15:03,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29. 2023-07-21 18:15:03,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 67a84a0700d78aab502ab4100c0dfc29: 2023-07-21 18:15:03,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,524 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=67a84a0700d78aab502ab4100c0dfc29, regionState=CLOSED 2023-07-21 18:15:03,524 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689963303524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689963303524"}]},"ts":"1689963303524"} 2023-07-21 18:15:03,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 18:15:03,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 67a84a0700d78aab502ab4100c0dfc29, server=jenkins-hbase4.apache.org,44645,1689963300199 in 160 msec 2023-07-21 18:15:03,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 18:15:03,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=67a84a0700d78aab502ab4100c0dfc29, UNASSIGN in 164 msec 2023-07-21 18:15:03,529 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689963303528"}]},"ts":"1689963303528"} 2023-07-21 18:15:03,530 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 18:15:03,532 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 18:15:03,534 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 178 msec 2023-07-21 18:15:03,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 18:15:03,660 INFO [Listener at localhost/43809] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 18:15:03,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-21 18:15:03,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-21 18:15:03,664 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 18:15:03,664 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 18:15:03,665 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 18:15:03,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:03,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:03,669 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 18:15:03,670 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/cf1, FileablePath, hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/recovered.edits] 2023-07-21 18:15:03,676 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/recovered.edits/4.seqid to hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/archive/data/default/t1/67a84a0700d78aab502ab4100c0dfc29/recovered.edits/4.seqid 2023-07-21 18:15:03,677 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/.tmp/data/default/t1/67a84a0700d78aab502ab4100c0dfc29 2023-07-21 18:15:03,677 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 18:15:03,680 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 18:15:03,681 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 18:15:03,683 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 18:15:03,684 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 18:15:03,684 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-21 18:15:03,684 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689963303684"}]},"ts":"9223372036854775807"} 2023-07-21 18:15:03,686 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 18:15:03,686 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 67a84a0700d78aab502ab4100c0dfc29, NAME => 't1,,1689963302637.67a84a0700d78aab502ab4100c0dfc29.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 18:15:03,686 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 18:15:03,686 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689963303686"}]},"ts":"9223372036854775807"} 2023-07-21 18:15:03,687 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 18:15:03,689 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 18:15:03,691 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 28 msec 2023-07-21 18:15:03,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 18:15:03,771 INFO [Listener at localhost/43809] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 18:15:03,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:03,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 18:15:03,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:03,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:03,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:03,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:03,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:03,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:03,793 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:03,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:03,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:03,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:03,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:03,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:03,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46000 deadline: 1689964503816, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:03,817 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:15:03,822 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,823 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:03,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:03,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:03,847 INFO [Listener at localhost/43809] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=571 (was 556) - Thread LEAK? -, OpenFileDescriptor=842 (was 833) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=526 (was 526), ProcessCount=173 (was 173), AvailableMemoryMB=7232 (was 7259) 2023-07-21 18:15:03,847 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 18:15:03,867 INFO [Listener at localhost/43809] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=526, ProcessCount=173, AvailableMemoryMB=7232 2023-07-21 18:15:03,867 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 18:15:03,867 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 18:15:03,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:03,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:15:03,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:03,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:03,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:03,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:03,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:03,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:03,881 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:03,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:03,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,884 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:03,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:03,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:03,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:03,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:03,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964503892, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:03,892 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:15:03,894 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,895 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:03,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 18:15:03,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:15:03,898 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-21 18:15:03,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 18:15:03,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 18:15:03,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:03,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 18:15:03,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:03,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:03,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:03,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:03,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:03,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:03,922 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:03,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:03,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:03,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:03,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:03,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:03,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:03,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964503934, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:03,936 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:15:03,938 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:03,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,940 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:03,962 INFO [Listener at localhost/43809] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=526 (was 526), ProcessCount=173 (was 173), AvailableMemoryMB=7228 (was 7232) 2023-07-21 18:15:03,962 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 18:15:03,981 INFO [Listener at localhost/43809] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=526, ProcessCount=173, AvailableMemoryMB=7228 2023-07-21 18:15:03,981 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 18:15:03,981 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 18:15:03,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:03,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:03,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:03,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:15:03,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:03,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:03,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:03,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:03,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:03,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:03,994 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:03,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:03,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:03,997 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:03,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:04,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:04,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:04,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964504004, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:04,005 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:15:04,007 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,008 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:04,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:04,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:04,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:04,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
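The ConstraintException traced above is the recurring setup/teardown pattern in TestRSGroupsBase: tearDownAfterMethod tries to move the master's RPC address (jenkins-hbase4.apache.org:34779) into the "master" RSGroup, and RSGroupAdminServer.moveServers rejects it because that address is not a known region server. Below is a minimal client-side sketch of the call that produces this error, using the same RSGroupAdminClient that appears in the trace; the configuration, connection and address values are illustrative placeholders taken from this log, not a prescribed way to drive a real cluster.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient is the same (internal) client the test drives via
      // VerifyingRSGroupAdminClient in the stack trace above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Address of the HMaster's RPC endpoint from the log -- not a region server.
      Address master = Address.fromParts("jenkins-hbase4.apache.org", 34779);
      // The server-side check only accepts addresses of known region servers, so this
      // call fails with ConstraintException:
      //   "Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist."
      rsGroupAdmin.moveServers(Collections.singleton(master), "master");
    }
  }
}

The test treats this as benign: the surrounding WARN lines ("Got this on setup, FYI") show the exception being caught and logged rather than failing the test method.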
2023-07-21 18:15:04,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:04,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:04,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:04,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:04,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:04,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:04,028 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:04,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:04,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:04,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:04,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:04,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964504037, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:04,038 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 18:15:04,040 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:04,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,040 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:04,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:04,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:04,062 INFO [Listener at localhost/43809] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574 (was 573) - Thread LEAK? -, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=526 (was 526), ProcessCount=173 (was 173), AvailableMemoryMB=7230 (was 7228) - AvailableMemoryMB LEAK? 
- 2023-07-21 18:15:04,062 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 18:15:04,083 INFO [Listener at localhost/43809] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=526, ProcessCount=173, AvailableMemoryMB=7230 2023-07-21 18:15:04,083 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 18:15:04,083 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 18:15:04,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:04,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 18:15:04,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:04,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:04,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:04,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:04,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:04,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:04,103 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:04,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:04,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,108 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:04,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:04,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:04,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964504114, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:04,115 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 18:15:04,117 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:04,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,118 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:04,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:04,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:04,119 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 18:15:04,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-21 18:15:04,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 18:15:04,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 18:15:04,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 18:15:04,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,135 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 18:15:04,138 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:15:04,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-21 18:15:04,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 18:15:04,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 18:15:04,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:04,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:46000 deadline: 1689964504237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 18:15:04,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 18:15:04,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 18:15:04,258 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 18:15:04,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-21 18:15:04,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 18:15:04,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-21 18:15:04,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 18:15:04,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 18:15:04,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 18:15:04,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-21 18:15:04,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,375 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,377 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 18:15:04,379 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,381 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 18:15:04,381 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 18:15:04,381 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,383 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 18:15:04,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-21 18:15:04,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 18:15:04,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 18:15:04,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 18:15:04,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 18:15:04,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:04,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 18:15:04,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:46000 deadline: 1689963364491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 18:15:04,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:04,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
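The testNamespaceConstraint sequence above exercises the coupling between namespaces and region server groups through the hbase.rsgroup.name namespace property: removing Group_foo while the Group_foo namespace still references it fails with "RSGroup Group_foo is referenced by namespace: Group_foo", and creating a namespace that points at a nonexistent group is rejected by RSGroupAdminEndpoint.preCreateNamespace with "Region server group foo does not exist." Below is a rough sketch of the equivalent Admin calls, assuming a group named Group_foo has already been added through the rsgroup admin; the namespace name in the second call is hypothetical and only mirrors the failing create in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceRSGroupConstraintSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Assumes an rsgroup named Group_foo already exists (added via AddRSGroup above).
      // Binding the namespace to the group makes the group un-removable:
      //   ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo")
          .build());

      // Pointing a namespace at a group that does not exist is rejected up front by the
      // RSGroupAdminEndpoint coprocessor (preCreateNamespace):
      //   ConstraintException: Region server group foo does not exist.
      // The namespace name here is hypothetical; the group name mirrors the failing call.
      admin.createNamespace(NamespaceDescriptor.create("ns_bad")
          .addConfiguration("hbase.rsgroup.name", "foo")
          .build());
    }
  }
}

Dropping the namespace first (as the DeleteNamespaceProcedure for Group_foo does above) is what lets the later RemoveRSGroup call for Group_foo succeed.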
2023-07-21 18:15:04,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:04,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:04,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:04,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-21 18:15:04,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 18:15:04,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:04,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 18:15:04,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
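The repeated "Updating znode: /hbase/rsgroup/..." and "Writing ZK GroupInfo count" lines come from RSGroupInfoManagerImpl mirroring the group definitions into ZooKeeper under /hbase/rsgroup, one child znode per group beneath the cluster's /hbase base znode. The following read-only sketch lists those children with the plain ZooKeeper client; the quorum address 127.0.0.1:60536 and the znode path are taken from this log, the rest is illustrative and assumes the mini-cluster is still running.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListRSGroupZNodesSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Quorum taken from the log lines above (quorum=127.0.0.1:60536, baseZNode=/hbase).
    ZooKeeper zk = new ZooKeeper("127.0.0.1:60536", 30000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    try {
      connected.await();
      // One child znode per region server group, e.g. default, master,
      // Group_foo, Group_anotherGroup while they exist in this test run.
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      for (String group : groups) {
        System.out.println(group);
      }
    } finally {
      zk.close();
    }
  }
}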
2023-07-21 18:15:04,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 18:15:04,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 18:15:04,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 18:15:04,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 18:15:04,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 18:15:04,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 18:15:04,512 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 18:15:04,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 18:15:04,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 18:15:04,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 18:15:04,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 18:15:04,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 18:15:04,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34779] to rsgroup master 2023-07-21 18:15:04,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 18:15:04,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46000 deadline: 1689964504533, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist. 2023-07-21 18:15:04,534 WARN [Listener at localhost/43809] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34779 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-21 18:15:04,537 INFO [Listener at localhost/43809] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 18:15:04,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 18:15:04,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 18:15:04,538 INFO [Listener at localhost/43809] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34959, jenkins-hbase4.apache.org:39687, jenkins-hbase4.apache.org:44645, jenkins-hbase4.apache.org:45925], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 18:15:04,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 18:15:04,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34779] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 18:15:04,567 INFO [Listener at localhost/43809] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 574), OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=526 (was 526), ProcessCount=173 (was 173), AvailableMemoryMB=7205 (was 7230) 2023-07-21 18:15:04,567 WARN [Listener at localhost/43809] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 18:15:04,567 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 18:15:04,567 INFO [Listener at localhost/43809] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 18:15:04,567 DEBUG [Listener at localhost/43809] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f8504c5 to 127.0.0.1:60536 2023-07-21 18:15:04,568 DEBUG [Listener at localhost/43809] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,568 DEBUG [Listener at localhost/43809] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21
18:15:04,568 DEBUG [Listener at localhost/43809] util.JVMClusterUtil(257): Found active master hash=946980588, stopped=false 2023-07-21 18:15:04,568 DEBUG [Listener at localhost/43809] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 18:15:04,568 DEBUG [Listener at localhost/43809] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 18:15:04,568 INFO [Listener at localhost/43809] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34779,1689963299707 2023-07-21 18:15:04,570 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:04,570 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:04,571 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:04,571 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:04,570 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 18:15:04,571 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 18:15:04,570 INFO [Listener at localhost/43809] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 18:15:04,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:04,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:04,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:04,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:04,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 18:15:04,572 DEBUG [Listener at localhost/43809] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29ab8974 to 127.0.0.1:60536 
2023-07-21 18:15:04,572 DEBUG [Listener at localhost/43809] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,572 INFO [Listener at localhost/43809] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45925,1689963299874' ***** 2023-07-21 18:15:04,572 INFO [Listener at localhost/43809] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:15:04,572 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:15:04,573 INFO [Listener at localhost/43809] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34959,1689963300038' ***** 2023-07-21 18:15:04,574 INFO [Listener at localhost/43809] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:15:04,574 INFO [Listener at localhost/43809] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44645,1689963300199' ***** 2023-07-21 18:15:04,574 INFO [Listener at localhost/43809] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:15:04,574 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:15:04,575 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:15:04,575 INFO [Listener at localhost/43809] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39687,1689963302296' ***** 2023-07-21 18:15:04,576 INFO [Listener at localhost/43809] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 18:15:04,576 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 18:15:04,581 INFO [RS:0;jenkins-hbase4:45925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1056e046{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:04,581 INFO [RS:1;jenkins-hbase4:34959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f7499f6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:04,581 INFO [RS:2;jenkins-hbase4:44645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@74bbe23e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:04,582 INFO [RS:3;jenkins-hbase4:39687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@bac8428{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 18:15:04,582 INFO [RS:1;jenkins-hbase4:34959] server.AbstractConnector(383): Stopped ServerConnector@7423cc75{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:15:04,582 INFO [RS:0;jenkins-hbase4:45925] server.AbstractConnector(383): Stopped ServerConnector@42d0e97d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:15:04,582 INFO [RS:3;jenkins-hbase4:39687] server.AbstractConnector(383): Stopped ServerConnector@1c41e666{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:15:04,582 INFO [RS:1;jenkins-hbase4:34959] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:15:04,582 INFO [RS:2;jenkins-hbase4:44645] server.AbstractConnector(383): Stopped ServerConnector@688c684{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 18:15:04,583 INFO [RS:3;jenkins-hbase4:39687] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:15:04,583 INFO [RS:1;jenkins-hbase4:34959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@16c0f7e8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:15:04,582 INFO [RS:0;jenkins-hbase4:45925] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:15:04,585 INFO [RS:1;jenkins-hbase4:34959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@381ae7fe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,STOPPED} 2023-07-21 18:15:04,585 INFO [RS:3;jenkins-hbase4:39687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29eacbd3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:15:04,583 INFO [RS:2;jenkins-hbase4:44645] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 18:15:04,585 INFO [RS:0;jenkins-hbase4:45925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@513b2579{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:15:04,587 INFO [RS:2;jenkins-hbase4:44645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@61dca119{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 18:15:04,586 INFO [RS:3;jenkins-hbase4:39687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@57d9343d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,STOPPED} 2023-07-21 18:15:04,588 INFO [RS:2;jenkins-hbase4:44645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@57b5bf85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,STOPPED} 2023-07-21 18:15:04,588 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,587 INFO [RS:0;jenkins-hbase4:45925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d4bb916{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,STOPPED} 2023-07-21 18:15:04,588 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:15:04,589 INFO [RS:1;jenkins-hbase4:34959] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:15:04,589 INFO [RS:1;jenkins-hbase4:34959] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush 
procedure manager gracefully. 2023-07-21 18:15:04,589 INFO [RS:1;jenkins-hbase4:34959] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:15:04,589 INFO [RS:2;jenkins-hbase4:44645] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:15:04,589 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:15:04,589 INFO [RS:2;jenkins-hbase4:44645] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:15:04,589 INFO [RS:3;jenkins-hbase4:39687] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:15:04,590 INFO [RS:3;jenkins-hbase4:39687] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:15:04,590 INFO [RS:3;jenkins-hbase4:39687] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:15:04,590 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:04,590 DEBUG [RS:3;jenkins-hbase4:39687] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66006577 to 127.0.0.1:60536 2023-07-21 18:15:04,590 DEBUG [RS:3;jenkins-hbase4:39687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,590 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39687,1689963302296; all regions closed. 2023-07-21 18:15:04,589 INFO [RS:2;jenkins-hbase4:44645] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 18:15:04,589 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:15:04,590 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(3305): Received CLOSE for 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:04,589 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(3305): Received CLOSE for 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:04,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1debd766be32a30cd334b2093439b36e, disabling compactions & flushes 2023-07-21 18:15:04,589 INFO [RS:0;jenkins-hbase4:45925] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 18:15:04,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:04,590 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44645,1689963300199 2023-07-21 18:15:04,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:04,591 DEBUG [RS:2;jenkins-hbase4:44645] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e34bb1b to 127.0.0.1:60536 2023-07-21 18:15:04,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 
after waiting 0 ms 2023-07-21 18:15:04,591 DEBUG [RS:2;jenkins-hbase4:44645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,591 INFO [RS:2;jenkins-hbase4:44645] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:15:04,591 INFO [RS:2;jenkins-hbase4:44645] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:15:04,591 INFO [RS:2;jenkins-hbase4:44645] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:15:04,591 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 18:15:04,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:04,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1debd766be32a30cd334b2093439b36e 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 18:15:04,592 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 18:15:04,592 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 18:15:04,592 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1478): Online Regions={1debd766be32a30cd334b2093439b36e=hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 18:15:04,592 DEBUG [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1504): Waiting on 1588230740, 1debd766be32a30cd334b2093439b36e 2023-07-21 18:15:04,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 18:15:04,592 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 18:15:04,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 18:15:04,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 18:15:04,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 18:15:04,592 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-21 18:15:04,593 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:04,593 DEBUG [RS:1;jenkins-hbase4:34959] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b99e8dc to 127.0.0.1:60536 2023-07-21 18:15:04,593 INFO [RS:0;jenkins-hbase4:45925] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 18:15:04,593 INFO [RS:0;jenkins-hbase4:45925] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 18:15:04,593 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:04,593 DEBUG [RS:0;jenkins-hbase4:45925] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6642d7a7 to 127.0.0.1:60536 2023-07-21 18:15:04,593 DEBUG [RS:0;jenkins-hbase4:45925] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,593 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45925,1689963299874; all regions closed. 2023-07-21 18:15:04,593 DEBUG [RS:1;jenkins-hbase4:34959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 00677051c5de5ea202c03eabf9ca4d7a, disabling compactions & flushes 2023-07-21 18:15:04,593 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 18:15:04,594 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1478): Online Regions={00677051c5de5ea202c03eabf9ca4d7a=hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a.} 2023-07-21 18:15:04,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:04,594 DEBUG [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1504): Waiting on 00677051c5de5ea202c03eabf9ca4d7a 2023-07-21 18:15:04,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:04,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. after waiting 0 ms 2023-07-21 18:15:04,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 
2023-07-21 18:15:04,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 00677051c5de5ea202c03eabf9ca4d7a 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-21 18:15:04,604 DEBUG [RS:3;jenkins-hbase4:39687] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs 2023-07-21 18:15:04,604 INFO [RS:3;jenkins-hbase4:39687] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39687%2C1689963302296:(num 1689963302610) 2023-07-21 18:15:04,604 DEBUG [RS:3;jenkins-hbase4:39687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,604 INFO [RS:3;jenkins-hbase4:39687] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,605 DEBUG [RS:0;jenkins-hbase4:45925] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs 2023-07-21 18:15:04,605 INFO [RS:0;jenkins-hbase4:45925] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45925%2C1689963299874:(num 1689963300992) 2023-07-21 18:15:04,605 DEBUG [RS:0;jenkins-hbase4:45925] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,606 INFO [RS:3;jenkins-hbase4:39687] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:15:04,606 INFO [RS:0;jenkins-hbase4:45925] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,606 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:15:04,606 INFO [RS:3;jenkins-hbase4:39687] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:15:04,606 INFO [RS:3;jenkins-hbase4:39687] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:15:04,606 INFO [RS:3;jenkins-hbase4:39687] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:15:04,606 INFO [RS:0;jenkins-hbase4:45925] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:15:04,606 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 18:15:04,607 INFO [RS:0;jenkins-hbase4:45925] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:15:04,607 INFO [RS:0;jenkins-hbase4:45925] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:15:04,607 INFO [RS:0;jenkins-hbase4:45925] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 18:15:04,608 INFO [RS:3;jenkins-hbase4:39687] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39687 2023-07-21 18:15:04,610 INFO [RS:0;jenkins-hbase4:45925] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45925 2023-07-21 18:15:04,624 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/.tmp/info/c2a5e6040819417b9699658485c8ef75 2023-07-21 18:15:04,629 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/info/089f64b5557f43d9b42dcd748acd536a 2023-07-21 18:15:04,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/.tmp/m/4151229e2f70420db81d4141a1e53caa 2023-07-21 18:15:04,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 089f64b5557f43d9b42dcd748acd536a 2023-07-21 18:15:04,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c2a5e6040819417b9699658485c8ef75 2023-07-21 18:15:04,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/.tmp/info/c2a5e6040819417b9699658485c8ef75 as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/info/c2a5e6040819417b9699658485c8ef75 2023-07-21 18:15:04,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4151229e2f70420db81d4141a1e53caa 2023-07-21 18:15:04,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/.tmp/m/4151229e2f70420db81d4141a1e53caa as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/m/4151229e2f70420db81d4141a1e53caa 2023-07-21 18:15:04,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c2a5e6040819417b9699658485c8ef75 2023-07-21 18:15:04,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/info/c2a5e6040819417b9699658485c8ef75, entries=3, sequenceid=9, 
filesize=5.0 K 2023-07-21 18:15:04,648 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 1debd766be32a30cd334b2093439b36e in 58ms, sequenceid=9, compaction requested=false 2023-07-21 18:15:04,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4151229e2f70420db81d4141a1e53caa 2023-07-21 18:15:04,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/m/4151229e2f70420db81d4141a1e53caa, entries=12, sequenceid=29, filesize=5.4 K 2023-07-21 18:15:04,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 00677051c5de5ea202c03eabf9ca4d7a in 58ms, sequenceid=29, compaction requested=false 2023-07-21 18:15:04,655 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/rep_barrier/c6978382eefd4832a30f4bf420cbb9bb 2023-07-21 18:15:04,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6978382eefd4832a30f4bf420cbb9bb 2023-07-21 18:15:04,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/rsgroup/00677051c5de5ea202c03eabf9ca4d7a/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 18:15:04,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:15:04,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:04,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 00677051c5de5ea202c03eabf9ca4d7a: 2023-07-21 18:15:04,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689963301721.00677051c5de5ea202c03eabf9ca4d7a. 2023-07-21 18:15:04,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/namespace/1debd766be32a30cd334b2093439b36e/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 18:15:04,669 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 
2023-07-21 18:15:04,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1debd766be32a30cd334b2093439b36e: 2023-07-21 18:15:04,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689963301298.1debd766be32a30cd334b2093439b36e. 2023-07-21 18:15:04,682 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/table/52b29d88bb61428ea094b6298306e193 2023-07-21 18:15:04,687 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 52b29d88bb61428ea094b6298306e193 2023-07-21 18:15:04,688 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/info/089f64b5557f43d9b42dcd748acd536a as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/info/089f64b5557f43d9b42dcd748acd536a 2023-07-21 18:15:04,693 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 089f64b5557f43d9b42dcd748acd536a 2023-07-21 18:15:04,693 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/info/089f64b5557f43d9b42dcd748acd536a, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 18:15:04,694 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/rep_barrier/c6978382eefd4832a30f4bf420cbb9bb as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/rep_barrier/c6978382eefd4832a30f4bf420cbb9bb 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, 
quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45925,1689963299874 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,695 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39687,1689963302296 2023-07-21 18:15:04,699 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45925,1689963299874] 2023-07-21 18:15:04,699 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45925,1689963299874; numProcessing=1 2023-07-21 18:15:04,700 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45925,1689963299874 already deleted, retry=false 2023-07-21 18:15:04,700 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45925,1689963299874 expired; onlineServers=3 2023-07-21 18:15:04,700 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39687,1689963302296] 2023-07-21 18:15:04,700 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39687,1689963302296; numProcessing=2 2023-07-21 18:15:04,700 
INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6978382eefd4832a30f4bf420cbb9bb 2023-07-21 18:15:04,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/rep_barrier/c6978382eefd4832a30f4bf420cbb9bb, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 18:15:04,701 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39687,1689963302296 already deleted, retry=false 2023-07-21 18:15:04,701 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39687,1689963302296 expired; onlineServers=2 2023-07-21 18:15:04,701 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/.tmp/table/52b29d88bb61428ea094b6298306e193 as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/table/52b29d88bb61428ea094b6298306e193 2023-07-21 18:15:04,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 52b29d88bb61428ea094b6298306e193 2023-07-21 18:15:04,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/table/52b29d88bb61428ea094b6298306e193, entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 18:15:04,708 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 116ms, sequenceid=26, compaction requested=false 2023-07-21 18:15:04,718 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 18:15:04,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 18:15:04,719 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 18:15:04,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 18:15:04,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 18:15:04,792 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44645,1689963300199; all regions closed. 2023-07-21 18:15:04,794 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34959,1689963300038; all regions closed. 
2023-07-21 18:15:04,801 DEBUG [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs 2023-07-21 18:15:04,801 INFO [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44645%2C1689963300199.meta:.meta(num 1689963301226) 2023-07-21 18:15:04,801 DEBUG [RS:1;jenkins-hbase4:34959] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs 2023-07-21 18:15:04,801 INFO [RS:1;jenkins-hbase4:34959] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34959%2C1689963300038:(num 1689963301036) 2023-07-21 18:15:04,801 DEBUG [RS:1;jenkins-hbase4:34959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,801 INFO [RS:1;jenkins-hbase4:34959] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,802 INFO [RS:1;jenkins-hbase4:34959] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 18:15:04,802 INFO [RS:1;jenkins-hbase4:34959] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 18:15:04,802 INFO [RS:1;jenkins-hbase4:34959] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 18:15:04,802 INFO [RS:1;jenkins-hbase4:34959] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 18:15:04,803 INFO [RS:1;jenkins-hbase4:34959] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34959 2023-07-21 18:15:04,803 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 18:15:04,806 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 18:15:04,806 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:04,806 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34959,1689963300038 2023-07-21 18:15:04,806 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34959,1689963300038] 2023-07-21 18:15:04,807 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34959,1689963300038; numProcessing=3 2023-07-21 18:15:04,809 DEBUG [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/oldWALs 2023-07-21 18:15:04,809 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34959,1689963300038 already deleted, retry=false 2023-07-21 18:15:04,809 INFO [RS:2;jenkins-hbase4:44645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44645%2C1689963300199:(num 1689963301009) 2023-07-21 18:15:04,809 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34959,1689963300038 expired; onlineServers=1 2023-07-21 18:15:04,809 DEBUG [RS:2;jenkins-hbase4:44645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 18:15:04,809 INFO [RS:2;jenkins-hbase4:44645] regionserver.LeaseManager(133): Closed leases 2023-07-21 18:15:04,809 INFO [RS:2;jenkins-hbase4:44645] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 18:15:04,810 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 18:15:04,811 INFO [RS:2;jenkins-hbase4:44645] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44645
2023-07-21 18:15:04,813 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44645,1689963300199
2023-07-21 18:15:04,813 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 18:15:04,814 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44645,1689963300199]
2023-07-21 18:15:04,814 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44645,1689963300199; numProcessing=4
2023-07-21 18:15:04,816 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44645,1689963300199 already deleted, retry=false
2023-07-21 18:15:04,816 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44645,1689963300199 expired; onlineServers=0
2023-07-21 18:15:04,816 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34779,1689963299707' *****
2023-07-21 18:15:04,816 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-21 18:15:04,817 DEBUG [M:0;jenkins-hbase4:34779] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c4e553a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-21 18:15:04,817 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-21 18:15:04,819 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-21 18:15:04,819 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 18:15:04,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 18:15:04,820 INFO [M:0;jenkins-hbase4:34779] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ae61d88{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-21 18:15:04,820 INFO [M:0;jenkins-hbase4:34779] server.AbstractConnector(383): Stopped ServerConnector@1c0afd76{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-21 18:15:04,820 INFO [M:0;jenkins-hbase4:34779] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-21 18:15:04,821 INFO [M:0;jenkins-hbase4:34779] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3693f4a4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-21 18:15:04,821 INFO [M:0;jenkins-hbase4:34779] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7db7184a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/hadoop.log.dir/,STOPPED}
2023-07-21 18:15:04,822 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34779,1689963299707
2023-07-21 18:15:04,822 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34779,1689963299707; all regions closed.
2023-07-21 18:15:04,822 DEBUG [M:0;jenkins-hbase4:34779] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-21 18:15:04,822 INFO [M:0;jenkins-hbase4:34779] master.HMaster(1491): Stopping master jetty server
2023-07-21 18:15:04,823 INFO [M:0;jenkins-hbase4:34779] server.AbstractConnector(383): Stopped ServerConnector@7f773764{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-21 18:15:04,823 DEBUG [M:0;jenkins-hbase4:34779] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-21 18:15:04,823 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-21 18:15:04,823 DEBUG [M:0;jenkins-hbase4:34779] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-21 18:15:04,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963300640] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689963300640,5,FailOnTimeoutGroup]
2023-07-21 18:15:04,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963300640] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689963300640,5,FailOnTimeoutGroup]
2023-07-21 18:15:04,823 INFO [M:0;jenkins-hbase4:34779] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-21 18:15:04,823 INFO [M:0;jenkins-hbase4:34779] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-21 18:15:04,824 INFO [M:0;jenkins-hbase4:34779] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-21 18:15:04,824 DEBUG [M:0;jenkins-hbase4:34779] master.HMaster(1512): Stopping service threads
2023-07-21 18:15:04,824 INFO [M:0;jenkins-hbase4:34779] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-21 18:15:04,824 ERROR [M:0;jenkins-hbase4:34779] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-21 18:15:04,824 INFO [M:0;jenkins-hbase4:34779] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-21 18:15:04,824 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-21 18:15:04,824 DEBUG [M:0;jenkins-hbase4:34779] zookeeper.ZKUtil(398): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-21 18:15:04,825 WARN [M:0;jenkins-hbase4:34779] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-21 18:15:04,825 INFO [M:0;jenkins-hbase4:34779] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-21 18:15:04,825 INFO [M:0;jenkins-hbase4:34779] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-21 18:15:04,825 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-21 18:15:04,825 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 18:15:04,825 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 18:15:04,825 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-21 18:15:04,825 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 18:15:04,825 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.17 KB heapSize=90.62 KB
2023-07-21 18:15:04,839 INFO [M:0;jenkins-hbase4:34779] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.17 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e261908863644761b83cd4eef72d0666
2023-07-21 18:15:04,845 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e261908863644761b83cd4eef72d0666 as hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e261908863644761b83cd4eef72d0666
2023-07-21 18:15:04,849 INFO [M:0;jenkins-hbase4:34779] regionserver.HStore(1080): Added hdfs://localhost:34709/user/jenkins/test-data/6e985a62-96ad-4878-b6e7-9c83b99d89c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e261908863644761b83cd4eef72d0666, entries=22, sequenceid=175, filesize=11.1 K
2023-07-21 18:15:04,850 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegion(2948): Finished flush of dataSize ~76.17 KB/78001, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=175, compaction requested=false
2023-07-21 18:15:04,852 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 18:15:04,852 DEBUG [M:0;jenkins-hbase4:34779] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-21 18:15:04,855 INFO [M:0;jenkins-hbase4:34779] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-21 18:15:04,855 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-21 18:15:04,855 INFO [M:0;jenkins-hbase4:34779] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34779
2023-07-21 18:15:04,857 DEBUG [M:0;jenkins-hbase4:34779] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34779,1689963299707 already deleted, retry=false
2023-07-21 18:15:05,071 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,071 INFO [M:0;jenkins-hbase4:34779] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34779,1689963299707; zookeeper connection closed.
2023-07-21 18:15:05,071 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): master:34779-0x101891806670000, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,171 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,171 INFO [RS:2;jenkins-hbase4:44645] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44645,1689963300199; zookeeper connection closed.
2023-07-21 18:15:05,171 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:44645-0x101891806670003, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,172 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a47f1f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a47f1f8
2023-07-21 18:15:05,272 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,272 INFO [RS:1;jenkins-hbase4:34959] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34959,1689963300038; zookeeper connection closed.
2023-07-21 18:15:05,272 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:34959-0x101891806670002, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,272 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@48f81f7d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@48f81f7d
2023-07-21 18:15:05,372 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,372 INFO [RS:3;jenkins-hbase4:39687] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39687,1689963302296; zookeeper connection closed.
2023-07-21 18:15:05,372 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:39687-0x10189180667000b, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,372 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@264ad3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@264ad3
2023-07-21 18:15:05,472 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,472 INFO [RS:0;jenkins-hbase4:45925] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45925,1689963299874; zookeeper connection closed.
2023-07-21 18:15:05,472 DEBUG [Listener at localhost/43809-EventThread] zookeeper.ZKWatcher(600): regionserver:45925-0x101891806670001, quorum=127.0.0.1:60536, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 18:15:05,473 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1c87ff17] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1c87ff17
2023-07-21 18:15:05,473 INFO [Listener at localhost/43809] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-21 18:15:05,473 WARN [Listener at localhost/43809] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 18:15:05,477 INFO [Listener at localhost/43809] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 18:15:05,581 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 18:15:05,581 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1517975358-172.31.14.131-1689963298839 (Datanode Uuid 545714ce-165b-4ec1-866e-18fe448c2a40) service to localhost/127.0.0.1:34709
2023-07-21 18:15:05,582 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data5/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,582 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data6/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,583 WARN [Listener at localhost/43809] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 18:15:05,586 INFO [Listener at localhost/43809] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 18:15:05,690 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 18:15:05,690 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1517975358-172.31.14.131-1689963298839 (Datanode Uuid 018ddf14-dd77-49b8-9d78-1260d389af29) service to localhost/127.0.0.1:34709
2023-07-21 18:15:05,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data3/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data4/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,692 WARN [Listener at localhost/43809] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 18:15:05,695 INFO [Listener at localhost/43809] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 18:15:05,798 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 18:15:05,798 WARN [BP-1517975358-172.31.14.131-1689963298839 heartbeating to localhost/127.0.0.1:34709] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1517975358-172.31.14.131-1689963298839 (Datanode Uuid dcc10afb-0f4c-4d50-81f9-ca2417f2128b) service to localhost/127.0.0.1:34709
2023-07-21 18:15:05,799 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data1/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,799 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6a66f9c4-4618-1f6e-697b-6d5c1ffa7340/cluster_f875a085-4694-55b9-cdf9-eae6f0017a24/dfs/data/data2/current/BP-1517975358-172.31.14.131-1689963298839] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 18:15:05,809 INFO [Listener at localhost/43809] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 18:15:05,923 INFO [Listener at localhost/43809] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-21 18:15:05,948 INFO [Listener at localhost/43809] hbase.HBaseTestingUtility(1293): Minicluster is down
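Editor's note (not part of the captured log): the shutdown sequence above ends with HBaseTestingUtility reporting "Minicluster is down". As a minimal illustrative sketch only, teardown of a minicluster-based test such as TestRSGroupsAdmin1 is typically driven by a JUnit @AfterClass hook on a shared HBaseTestingUtility instance; the class name and TEST_UTIL field below are assumptions for illustration, not the actual test source.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
  // Hypothetical shared test utility; the real rsgroup tests manage their own instance.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the mini HBase cluster, the mini DFS, and the mini ZooKeeper quorum,
    // producing shutdown output like the log above, ending with "Minicluster is down".
    TEST_UTIL.shutdownMiniCluster();
  }
}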