2023-07-24 20:10:24,811 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6
2023-07-24 20:10:24,830 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-24 20:10:24,855 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-24 20:10:24,855 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590, deleteOnExit=true
2023-07-24 20:10:24,856 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-24 20:10:24,856 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/test.cache.data in system properties and HBase conf
2023-07-24 20:10:24,857 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.tmp.dir in system properties and HBase conf
2023-07-24 20:10:24,858 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir in system properties and HBase conf
2023-07-24 20:10:24,858 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-24 20:10:24,859 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-24 20:10:24,859 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-24 20:10:25,013 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-24 20:10:25,461 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-24 20:10:25,466 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-24 20:10:25,466 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-24 20:10:25,467 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-24 20:10:25,467 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 20:10:25,467 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-24 20:10:25,467 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-24 20:10:25,468 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 20:10:25,468 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 20:10:25,469 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-24 20:10:25,469 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/nfs.dump.dir in system properties and HBase conf
2023-07-24 20:10:25,469 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/java.io.tmpdir in system properties and HBase conf
2023-07-24 20:10:25,470 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 20:10:25,470 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-24 20:10:25,471 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-24 20:10:26,025 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 20:10:26,031 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 20:10:26,398 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-24 20:10:26,609 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-24 20:10:26,626 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 20:10:26,665 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 20:10:26,707 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/java.io.tmpdir/Jetty_localhost_36103_hdfs____duobbv/webapp
2023-07-24 20:10:26,860 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36103
2023-07-24 20:10:26,901 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 20:10:26,901 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 20:10:27,349 WARN [Listener at localhost/32791] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 20:10:27,420 WARN [Listener at localhost/32791] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 20:10:27,438 WARN [Listener at localhost/32791] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 20:10:27,444 INFO [Listener at localhost/32791] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 20:10:27,450 INFO [Listener at localhost/32791] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/java.io.tmpdir/Jetty_localhost_33489_datanode____pmmse4/webapp
2023-07-24 20:10:27,573 INFO [Listener at localhost/32791] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33489
2023-07-24 20:10:28,026 WARN [Listener at localhost/35949] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 20:10:28,076 WARN [Listener at localhost/35949] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 20:10:28,086 WARN [Listener at localhost/35949] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 20:10:28,089 INFO [Listener at localhost/35949] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 20:10:28,098 INFO [Listener at localhost/35949] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/java.io.tmpdir/Jetty_localhost_32871_datanode____.w41jye/webapp
2023-07-24 20:10:28,203 INFO [Listener at localhost/35949] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32871
2023-07-24 20:10:28,217 WARN [Listener at localhost/34277] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 20:10:28,236 WARN [Listener at localhost/34277] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 20:10:28,239 WARN [Listener at localhost/34277] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 20:10:28,241 INFO [Listener at localhost/34277] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 20:10:28,247 INFO [Listener at localhost/34277] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/java.io.tmpdir/Jetty_localhost_46583_datanode____5p18gl/webapp
2023-07-24 20:10:28,368 INFO [Listener at localhost/34277] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46583
2023-07-24 20:10:28,380 WARN [Listener at localhost/39801] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 20:10:28,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9cf5ce7a36203c2b: Processing first storage report for DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a from datanode f9613ce8-8e43-460f-a070-d19e8d6e1e0b
2023-07-24 20:10:28,599 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9cf5ce7a36203c2b: from storage DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a node DatanodeRegistration(127.0.0.1:33397, datanodeUuid=f9613ce8-8e43-460f-a070-d19e8d6e1e0b, infoPort=41861, infoSecurePort=0, ipcPort=34277, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,599 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf5654fb8f59f3249: Processing first storage report for DS-c1ea1d19-77ac-4821-abab-73633b2c9332 from datanode 03c59458-cfbb-467e-87d9-72ed36a77cf2
2023-07-24 20:10:28,599 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf5654fb8f59f3249: from storage DS-c1ea1d19-77ac-4821-abab-73633b2c9332 node DatanodeRegistration(127.0.0.1:37373, datanodeUuid=03c59458-cfbb-467e-87d9-72ed36a77cf2, infoPort=35453, infoSecurePort=0, ipcPort=35949, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,600 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x73913f71e5a1c8f: Processing first storage report for DS-4d145b76-665c-4122-a267-81459a7dcebb from datanode 95ebb679-b89d-4261-bbcf-547852e4613b
2023-07-24 20:10:28,600 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x73913f71e5a1c8f: from storage DS-4d145b76-665c-4122-a267-81459a7dcebb node DatanodeRegistration(127.0.0.1:41925, datanodeUuid=95ebb679-b89d-4261-bbcf-547852e4613b, infoPort=45339, infoSecurePort=0, ipcPort=39801, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,600 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9cf5ce7a36203c2b: Processing first storage report for DS-afe38441-ebd6-448c-8ddd-dab298da424e from datanode f9613ce8-8e43-460f-a070-d19e8d6e1e0b
2023-07-24 20:10:28,600 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9cf5ce7a36203c2b: from storage DS-afe38441-ebd6-448c-8ddd-dab298da424e node DatanodeRegistration(127.0.0.1:33397, datanodeUuid=f9613ce8-8e43-460f-a070-d19e8d6e1e0b, infoPort=41861, infoSecurePort=0, ipcPort=34277, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,600 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf5654fb8f59f3249: Processing first storage report for DS-1c27827f-fdc3-49c9-ab24-2e19dd55d641 from datanode 03c59458-cfbb-467e-87d9-72ed36a77cf2
2023-07-24 20:10:28,601 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf5654fb8f59f3249: from storage DS-1c27827f-fdc3-49c9-ab24-2e19dd55d641 node DatanodeRegistration(127.0.0.1:37373, datanodeUuid=03c59458-cfbb-467e-87d9-72ed36a77cf2, infoPort=35453, infoSecurePort=0, ipcPort=35949, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,601 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x73913f71e5a1c8f: Processing first storage report for DS-4ed6ee88-cadc-4f1f-8a05-1591c82c9f44 from datanode 95ebb679-b89d-4261-bbcf-547852e4613b
2023-07-24 20:10:28,601 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x73913f71e5a1c8f: from storage DS-4ed6ee88-cadc-4f1f-8a05-1591c82c9f44 node DatanodeRegistration(127.0.0.1:41925, datanodeUuid=95ebb679-b89d-4261-bbcf-547852e4613b, infoPort=45339, infoSecurePort=0, ipcPort=39801, storageInfo=lv=-57;cid=testClusterID;nsid=1592377879;c=1690229426124), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 20:10:28,835 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6
2023-07-24 20:10:28,952 INFO [Listener at localhost/39801] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/zookeeper_0, clientPort=52127, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-24 20:10:28,971 INFO [Listener at localhost/39801] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52127
2023-07-24 20:10:28,979 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:28,982 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:29,697 INFO [Listener at localhost/39801] util.FSUtils(471): Created version file at hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 with version=8
2023-07-24 20:10:29,697 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/hbase-staging
2023-07-24 20:10:29,709 DEBUG [Listener at localhost/39801] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-24 20:10:29,710 DEBUG [Listener at localhost/39801] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-24 20:10:29,710 DEBUG [Listener at localhost/39801] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-24 20:10:29,710 DEBUG [Listener at localhost/39801] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-24 20:10:30,141 INFO [Listener at localhost/39801] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-24 20:10:30,781 INFO [Listener at localhost/39801] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 20:10:30,827 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:30,828 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:30,828 INFO [Listener at localhost/39801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 20:10:30,828 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:30,829 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 20:10:30,989 INFO [Listener at localhost/39801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 20:10:31,089 DEBUG [Listener at localhost/39801] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-24 20:10:31,191 INFO [Listener at localhost/39801] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43005
2023-07-24 20:10:31,203 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:31,205 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:31,228 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43005 connecting to ZooKeeper ensemble=127.0.0.1:52127
2023-07-24 20:10:31,282 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:430050x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 20:10:31,285 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43005-0x10198f4d5ba0000 connected
2023-07-24 20:10:31,319 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 20:10:31,320 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 20:10:31,324 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 20:10:31,335 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43005
2023-07-24 20:10:31,335 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43005
2023-07-24 20:10:31,337 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43005
2023-07-24 20:10:31,337 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43005
2023-07-24 20:10:31,338 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43005
2023-07-24 20:10:31,379 INFO [Listener at localhost/39801] log.Log(170): Logging initialized @7333ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-24 20:10:31,529 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 20:10:31,530 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 20:10:31,531 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 20:10:31,533 INFO [Listener at localhost/39801] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-24 20:10:31,533 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 20:10:31,533 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 20:10:31,537 INFO [Listener at localhost/39801] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 20:10:31,607 INFO [Listener at localhost/39801] http.HttpServer(1146): Jetty bound to port 34897
2023-07-24 20:10:31,610 INFO [Listener at localhost/39801] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 20:10:31,660 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:31,663 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40c11d31{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,AVAILABLE}
2023-07-24 20:10:31,664 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:31,664 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69d623f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 20:10:31,739 INFO [Listener at localhost/39801] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 20:10:31,758 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 20:10:31,759 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 20:10:31,761 INFO [Listener at localhost/39801] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 20:10:31,768 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:31,797 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@60d47226{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-24 20:10:31,811 INFO [Listener at localhost/39801] server.AbstractConnector(333): Started ServerConnector@7ab055ba{HTTP/1.1, (http/1.1)}{0.0.0.0:34897}
2023-07-24 20:10:31,811 INFO [Listener at localhost/39801] server.Server(415): Started @7765ms
2023-07-24 20:10:31,814 INFO [Listener at localhost/39801] master.HMaster(444): hbase.rootdir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1, hbase.cluster.distributed=false
2023-07-24 20:10:31,902 INFO [Listener at localhost/39801] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 20:10:31,903 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:31,903 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:31,903 INFO [Listener at localhost/39801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 20:10:31,903 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:31,903 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 20:10:31,913 INFO [Listener at localhost/39801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 20:10:31,918 INFO [Listener at localhost/39801] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36167
2023-07-24 20:10:31,921 INFO [Listener at localhost/39801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 20:10:31,930 DEBUG [Listener at localhost/39801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 20:10:31,932 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:31,935 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:31,937 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36167 connecting to ZooKeeper ensemble=127.0.0.1:52127
2023-07-24 20:10:31,942 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:361670x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 20:10:31,944 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:361670x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 20:10:31,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36167-0x10198f4d5ba0001 connected
2023-07-24 20:10:31,946 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 20:10:31,948 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 20:10:31,954 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36167
2023-07-24 20:10:31,955 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36167
2023-07-24 20:10:31,957 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36167
2023-07-24 20:10:31,961 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36167
2023-07-24 20:10:31,962 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36167
2023-07-24 20:10:31,966 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 20:10:31,966 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 20:10:31,966 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 20:10:31,969 INFO [Listener at localhost/39801] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 20:10:31,969 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 20:10:31,969 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 20:10:31,970 INFO [Listener at localhost/39801] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 20:10:31,974 INFO [Listener at localhost/39801] http.HttpServer(1146): Jetty bound to port 38315
2023-07-24 20:10:31,974 INFO [Listener at localhost/39801] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 20:10:31,981 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:31,981 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@c83b084{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,AVAILABLE}
2023-07-24 20:10:31,982 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:31,982 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b5a29b4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 20:10:31,998 INFO [Listener at localhost/39801] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 20:10:32,000 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 20:10:32,001 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 20:10:32,001 INFO [Listener at localhost/39801] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-24 20:10:32,003 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,008 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@f895154{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 20:10:32,010 INFO [Listener at localhost/39801] server.AbstractConnector(333): Started ServerConnector@794438b4{HTTP/1.1, (http/1.1)}{0.0.0.0:38315}
2023-07-24 20:10:32,010 INFO [Listener at localhost/39801] server.Server(415): Started @7964ms
2023-07-24 20:10:32,027 INFO [Listener at localhost/39801] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 20:10:32,027 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,028 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,028 INFO [Listener at localhost/39801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 20:10:32,028 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,029 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 20:10:32,029 INFO [Listener at localhost/39801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 20:10:32,031 INFO [Listener at localhost/39801] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45007
2023-07-24 20:10:32,032 INFO [Listener at localhost/39801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 20:10:32,034 DEBUG [Listener at localhost/39801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 20:10:32,035 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:32,037 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:32,038 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45007 connecting to ZooKeeper ensemble=127.0.0.1:52127
2023-07-24 20:10:32,044 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:450070x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 20:10:32,045 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:450070x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 20:10:32,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45007-0x10198f4d5ba0002 connected
2023-07-24 20:10:32,046 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 20:10:32,047 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 20:10:32,048 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45007
2023-07-24 20:10:32,050 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45007
2023-07-24 20:10:32,050 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45007
2023-07-24 20:10:32,052 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45007
2023-07-24 20:10:32,055 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45007
2023-07-24 20:10:32,058 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 20:10:32,058 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 20:10:32,058 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 20:10:32,059 INFO [Listener at localhost/39801] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 20:10:32,059 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 20:10:32,059 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 20:10:32,059 INFO [Listener at localhost/39801] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 20:10:32,060 INFO [Listener at localhost/39801] http.HttpServer(1146): Jetty bound to port 35449
2023-07-24 20:10:32,061 INFO [Listener at localhost/39801] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 20:10:32,074 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,074 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28e4d5e5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,AVAILABLE}
2023-07-24 20:10:32,075 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,075 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eb4fc00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 20:10:32,088 INFO [Listener at localhost/39801] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 20:10:32,089 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 20:10:32,090 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 20:10:32,090 INFO [Listener at localhost/39801] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 20:10:32,093 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,094 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2e92a8f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 20:10:32,095 INFO [Listener at localhost/39801] server.AbstractConnector(333): Started ServerConnector@5761f63f{HTTP/1.1, (http/1.1)}{0.0.0.0:35449}
2023-07-24 20:10:32,096 INFO [Listener at localhost/39801] server.Server(415): Started @8050ms
2023-07-24 20:10:32,114 INFO [Listener at localhost/39801] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 20:10:32,114 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,115 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,115 INFO [Listener at localhost/39801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 20:10:32,115 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 20:10:32,115 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 20:10:32,115 INFO [Listener at localhost/39801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 20:10:32,118 INFO [Listener at localhost/39801] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43675
2023-07-24 20:10:32,118 INFO [Listener at localhost/39801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 20:10:32,125 DEBUG [Listener at localhost/39801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 20:10:32,127 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:32,128 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:32,129 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43675 connecting to ZooKeeper ensemble=127.0.0.1:52127
2023-07-24 20:10:32,134 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:436750x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 20:10:32,135 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:436750x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 20:10:32,135 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:436750x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 20:10:32,136 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:436750x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 20:10:32,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43675-0x10198f4d5ba0003 connected
2023-07-24 20:10:32,139 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43675
2023-07-24 20:10:32,141 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43675
2023-07-24 20:10:32,141 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43675
2023-07-24 20:10:32,147 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43675
2023-07-24 20:10:32,148 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43675
2023-07-24 20:10:32,151 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 20:10:32,151 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 20:10:32,151 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 20:10:32,152 INFO [Listener at localhost/39801] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 20:10:32,152 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 20:10:32,152 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 20:10:32,152 INFO [Listener at localhost/39801] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 20:10:32,153 INFO [Listener at localhost/39801] http.HttpServer(1146): Jetty bound to port 45029
2023-07-24 20:10:32,153 INFO [Listener at localhost/39801] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 20:10:32,158 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,159 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f2f2029{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,AVAILABLE}
2023-07-24 20:10:32,159 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,159 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7abf9a1c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 20:10:32,168 INFO [Listener at localhost/39801] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 20:10:32,169 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 20:10:32,169 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 20:10:32,169 INFO [Listener at localhost/39801] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-24 20:10:32,170 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 20:10:32,171 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@87ade83{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 20:10:32,172 INFO [Listener at localhost/39801] server.AbstractConnector(333): Started ServerConnector@26c1a2f{HTTP/1.1, (http/1.1)}{0.0.0.0:45029}
2023-07-24 20:10:32,172 INFO [Listener at localhost/39801] server.Server(415): Started @8126ms
2023-07-24 20:10:32,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 20:10:32,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6a0c70ce{HTTP/1.1, (http/1.1)}{0.0.0.0:35793}
2023-07-24 20:10:32,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8138ms
2023-07-24 20:10:32,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43005,1690229429933
2023-07-24 20:10:32,197 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 20:10:32,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43005,1690229429933
2023-07-24 20:10:32,218 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 20:10:32,218 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 20:10:32,218 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 20:10:32,218 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 20:10:32,219 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 20:10:32,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 20:10:32,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43005,1690229429933 from backup master directory
2023-07-24 20:10:32,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 20:10:32,227 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43005,1690229429933
2023-07-24 20:10:32,227 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 20:10:32,228 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-24 20:10:32,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43005,1690229429933
2023-07-24 20:10:32,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-24 20:10:32,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-24 20:10:32,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/hbase.id with ID: 4747f820-11b4-4442-9950-46fdbe51817f
2023-07-24 20:10:32,401 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 20:10:32,422 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 20:10:32,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x39ea1c96 to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-24 20:10:32,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3354acbb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-24 20:10:32,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-24 20:10:32,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-24 20:10:32,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-24 20:10:32,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-24
20:10:32,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 20:10:32,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 20:10:32,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 
2023-07-24 20:10:32,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store-tmp 2023-07-24 20:10:32,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:32,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:10:32,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:32,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:32,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:10:32,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:32,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 20:10:32,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:10:32,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/WALs/jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:32,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43005%2C1690229429933, suffix=, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/WALs/jenkins-hbase4.apache.org,43005,1690229429933, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/oldWALs, maxLogs=10 2023-07-24 20:10:32,784 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 20:10:32,784 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:32,784 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:32,793 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 20:10:32,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/WALs/jenkins-hbase4.apache.org,43005,1690229429933/jenkins-hbase4.apache.org%2C43005%2C1690229429933.1690229432733 2023-07-24 20:10:32,893 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK], DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK]] 2023-07-24 20:10:32,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:32,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:32,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:32,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:33,005 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:33,014 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 20:10:33,057 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 20:10:33,074 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 20:10:33,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:33,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:33,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:33,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:33,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11744991040, jitterRate=0.09383752942085266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:33,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:10:33,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 20:10:33,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 20:10:33,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 20:10:33,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 20:10:33,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 20:10:33,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 54 msec 2023-07-24 20:10:33,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 20:10:33,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 20:10:33,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 20:10:33,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 20:10:33,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 20:10:33,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 20:10:33,277 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:33,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 20:10:33,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 20:10:33,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 20:10:33,301 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:33,301 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:33,301 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:33,301 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:33,301 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:33,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43005,1690229429933, sessionid=0x10198f4d5ba0000, setting cluster-up flag (Was=false) 2023-07-24 20:10:33,325 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:33,331 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 20:10:33,332 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:33,344 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:33,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 20:10:33,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:33,360 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.hbase-snapshot/.tmp 2023-07-24 20:10:33,395 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(951): ClusterId : 4747f820-11b4-4442-9950-46fdbe51817f 2023-07-24 20:10:33,420 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:33,423 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(951): ClusterId : 4747f820-11b4-4442-9950-46fdbe51817f 2023-07-24 20:10:33,424 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:33,428 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(951): ClusterId : 4747f820-11b4-4442-9950-46fdbe51817f 2023-07-24 20:10:33,429 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:33,431 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:33,431 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:33,431 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:33,431 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:33,432 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:33,434 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:33,439 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:33,439 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:33,439 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:33,441 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ReadOnlyZKClient(139): Connect 0x696e3fda to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-24 20:10:33,441 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ReadOnlyZKClient(139): Connect 0x34d28169 to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:33,442 DEBUG [RS:0;jenkins-hbase4:36167] zookeeper.ReadOnlyZKClient(139): Connect 0x3d1ba26a to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:33,450 DEBUG [RS:2;jenkins-hbase4:43675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a157983, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:33,450 DEBUG [RS:0;jenkins-hbase4:36167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33dc4bb9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:33,450 DEBUG [RS:1;jenkins-hbase4:45007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67a9dc87, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:33,450 DEBUG [RS:2;jenkins-hbase4:43675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e2aea19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:33,450 DEBUG [RS:0;jenkins-hbase4:36167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37e3c91a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:33,451 DEBUG [RS:1;jenkins-hbase4:45007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@639654aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:33,486 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43675 2023-07-24 20:10:33,488 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36167 2023-07-24 20:10:33,491 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:45007 2023-07-24 20:10:33,494 INFO [RS:0;jenkins-hbase4:36167] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:33,494 INFO [RS:2;jenkins-hbase4:43675] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:33,495 INFO [RS:2;jenkins-hbase4:43675] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:33,495 INFO [RS:1;jenkins-hbase4:45007] regionserver.RegionServerCoprocessorHost(66): System 
coprocessor loading is enabled 2023-07-24 20:10:33,495 INFO [RS:1;jenkins-hbase4:45007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:33,495 INFO [RS:0;jenkins-hbase4:36167] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:33,495 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:33,495 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:33,495 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:33,499 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:43675, startcode=1690229432113 2023-07-24 20:10:33,503 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:45007, startcode=1690229432026 2023-07-24 20:10:33,505 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:36167, startcode=1690229431901 2023-07-24 20:10:33,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 20:10:33,524 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 20:10:33,526 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:10:33,526 DEBUG [RS:0;jenkins-hbase4:36167] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:33,527 DEBUG [RS:1;jenkins-hbase4:45007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:33,526 DEBUG [RS:2;jenkins-hbase4:43675] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:33,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 20:10:33,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 20:10:33,651 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45147, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:33,651 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57309, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:33,651 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38333, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:33,667 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:33,680 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:33,682 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:33,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:33,716 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-24 20:10:33,716 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 20:10:33,717 WARN [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 20:10:33,716 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 20:10:33,717 WARN [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 20:10:33,717 WARN [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 20:10:33,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:10:33,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 20:10:33,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:10:33,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 20:10:33,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:33,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690229463781 2023-07-24 20:10:33,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 20:10:33,787 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:33,788 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 20:10:33,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 20:10:33,796 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:33,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 20:10:33,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 20:10:33,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 20:10:33,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 20:10:33,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:33,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 20:10:33,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 20:10:33,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 20:10:33,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 20:10:33,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 20:10:33,818 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:43675, startcode=1690229432113 2023-07-24 20:10:33,819 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:45007, startcode=1690229432026 2023-07-24 20:10:33,820 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:33,820 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at 
org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:33,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229433821,5,FailOnTimeoutGroup] 2023-07-24 20:10:33,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229433824,5,FailOnTimeoutGroup] 2023-07-24 20:10:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 20:10:33,825 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:36167, startcode=1690229431901 2023-07-24 20:10:33,828 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 20:10:33,828 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 20:10:33,828 WARN [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-24 20:10:33,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:33,828 WARN [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-24 20:10:33,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:33,834 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:33,836 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:10:33,842 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 20:10:33,844 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 2023-07-24 20:10:33,844 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32791 2023-07-24 20:10:33,844 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34897 2023-07-24 20:10:33,851 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:33,852 DEBUG [RS:0;jenkins-hbase4:36167] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:33,852 WARN [RS:0;jenkins-hbase4:36167] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:10:33,853 INFO [RS:0;jenkins-hbase4:36167] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:33,857 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:33,858 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36167,1690229431901] 2023-07-24 20:10:33,886 DEBUG [RS:0;jenkins-hbase4:36167] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:33,901 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:33,903 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:33,903 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE 
=> 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 2023-07-24 20:10:33,905 DEBUG [RS:0;jenkins-hbase4:36167] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:33,918 INFO [RS:0;jenkins-hbase4:36167] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:33,979 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:33,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:10:33,984 INFO [RS:0;jenkins-hbase4:36167] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:33,986 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/info 2023-07-24 20:10:33,988 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:10:33,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:33,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:10:33,989 INFO [RS:0;jenkins-hbase4:36167] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:33,990 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:33,991 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:33,993 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:10:33,994 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:10:33,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:33,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:10:33,998 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/table 2023-07-24 20:10:33,999 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:10:34,000 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:34,002 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
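The CompactionConfiguration(173) records above print the effective compaction settings for each column family of hbase:meta (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0). The following is a minimal sketch of how such values could be placed on the test Configuration before the cluster starts; the hbase.hstore.compaction.* keys are the standard HBase 2.x property names and the literal values simply mirror the log, so treat it as an illustration rather than this test's actual setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Values mirroring the CompactionConfiguration lines logged above.
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    return conf;
  }
}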
2023-07-24 20:10:34,002 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:34,003 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,004 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740 2023-07-24 20:10:34,004 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,004 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,004 DEBUG [RS:0;jenkins-hbase4:36167] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,005 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740 2023-07-24 20:10:34,010 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 20:10:34,013 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:10:34,015 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,015 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,015 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:34,017 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:34,018 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9419934400, jitterRate=-0.12270024418830872}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:10:34,018 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:10:34,018 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:10:34,018 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:10:34,018 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:10:34,019 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:10:34,019 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:10:34,020 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:10:34,020 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:10:34,029 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:43675, startcode=1690229432113 2023-07-24 20:10:34,029 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:45007, startcode=1690229432026 2023-07-24 20:10:34,030 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,031 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:10:34,031 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 20:10:34,031 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,031 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:10:34,032 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 20:10:34,033 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 2023-07-24 20:10:34,034 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32791 2023-07-24 20:10:34,034 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:34,034 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34897 2023-07-24 20:10:34,034 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 20:10:34,034 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 2023-07-24 20:10:34,035 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32791 2023-07-24 20:10:34,035 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34897 2023-07-24 20:10:34,039 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:34,039 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:34,040 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,040 WARN [RS:2;jenkins-hbase4:43675] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
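The reportForDuty and "Registering regionserver" records show the second and third region servers joining the master, with the rsgroup ServerEventsListenerThread refreshing the default group each time ("Updated with servers: 3"). A hedged sketch of how a client or test could confirm that all three servers are live through the public Admin API follows; the connection setup is assumed and is not taken from this test.

import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServerCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask the master only for the live-server section of the cluster status.
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      int live = metrics.getLiveServerMetrics().size();
      System.out.println("Live region servers: " + live); // expected to be 3 in this run
    }
  }
}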
2023-07-24 20:10:34,040 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43675,1690229432113] 2023-07-24 20:10:34,041 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45007,1690229432026] 2023-07-24 20:10:34,041 INFO [RS:0;jenkins-hbase4:36167] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:34,040 INFO [RS:2;jenkins-hbase4:43675] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:34,042 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,042 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,043 WARN [RS:1;jenkins-hbase4:45007] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:10:34,043 INFO [RS:1;jenkins-hbase4:45007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:34,043 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,042 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:34,044 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,045 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36167,1690229431901-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:34,058 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 20:10:34,061 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:34,061 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:34,062 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,062 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,062 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,063 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,064 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:34,066 INFO [RS:1;jenkins-hbase4:45007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:34,066 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:34,072 INFO [RS:2;jenkins-hbase4:43675] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:34,095 INFO [RS:0;jenkins-hbase4:36167] regionserver.Replication(203): jenkins-hbase4.apache.org,36167,1690229431901 started 2023-07-24 20:10:34,095 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36167,1690229431901, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36167, sessionid=0x10198f4d5ba0001 2023-07-24 20:10:34,102 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:34,103 DEBUG [RS:0;jenkins-hbase4:36167] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:34,103 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36167,1690229431901' 2023-07-24 20:10:34,103 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:34,103 INFO [RS:2;jenkins-hbase4:43675] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 
2023-07-24 20:10:34,108 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 20:10:34,114 INFO [RS:2;jenkins-hbase4:43675] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:34,115 INFO [RS:1;jenkins-hbase4:45007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:34,115 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,119 INFO [RS:1;jenkins-hbase4:45007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:34,119 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,119 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:34,120 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:34,120 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:34,120 DEBUG [RS:0;jenkins-hbase4:36167] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:34,120 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36167,1690229431901' 2023-07-24 20:10:34,120 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:34,121 DEBUG [RS:0;jenkins-hbase4:36167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:34,122 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 20:10:34,122 DEBUG [RS:0;jenkins-hbase4:36167] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:34,122 INFO [RS:0;jenkins-hbase4:36167] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:10:34,122 INFO [RS:0;jenkins-hbase4:36167] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
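RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager both report "Quota support disabled", which is the default behaviour. If a test needed quotas, the switch would have to be set before the cluster starts; a minimal sketch using the standard hbase.quota.enabled flag is shown below, with the surrounding setup assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaConfigSketch {
  public static Configuration withQuotasEnabled() {
    Configuration conf = HBaseConfiguration.create();
    // Quotas are off by default, which is why the log above prints
    // "Quota support disabled"; flipping this before startup enables the
    // RPC and space quota managers on each region server.
    conf.setBoolean("hbase.quota.enabled", true);
    return conf;
  }
}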
2023-07-24 20:10:34,126 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:34,127 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:34,130 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,130 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,130 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,133 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,133 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,133 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service 
name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,134 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,135 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,135 DEBUG [RS:2;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,135 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,135 DEBUG [RS:1;jenkins-hbase4:45007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:34,140 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,140 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,140 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,142 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,143 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,143 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,159 INFO [RS:2;jenkins-hbase4:43675] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:34,160 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43675,1690229432113-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,160 INFO [RS:1;jenkins-hbase4:45007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:34,161 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45007,1690229432026-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:34,174 INFO [RS:1;jenkins-hbase4:45007] regionserver.Replication(203): jenkins-hbase4.apache.org,45007,1690229432026 started 2023-07-24 20:10:34,174 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45007,1690229432026, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45007, sessionid=0x10198f4d5ba0002 2023-07-24 20:10:34,174 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:34,174 DEBUG [RS:1;jenkins-hbase4:45007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,175 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45007,1690229432026' 2023-07-24 20:10:34,175 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:34,176 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:34,176 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:34,176 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:34,176 DEBUG [RS:1;jenkins-hbase4:45007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,177 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45007,1690229432026' 2023-07-24 20:10:34,177 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:34,177 DEBUG [RS:1;jenkins-hbase4:45007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:34,178 DEBUG [RS:1;jenkins-hbase4:45007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:34,178 INFO [RS:1;jenkins-hbase4:45007] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:10:34,178 INFO [RS:1;jenkins-hbase4:45007] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 20:10:34,182 INFO [RS:2;jenkins-hbase4:43675] regionserver.Replication(203): jenkins-hbase4.apache.org,43675,1690229432113 started 2023-07-24 20:10:34,183 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43675,1690229432113, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43675, sessionid=0x10198f4d5ba0003 2023-07-24 20:10:34,183 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:34,183 DEBUG [RS:2;jenkins-hbase4:43675] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,183 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43675,1690229432113' 2023-07-24 20:10:34,183 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:34,183 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:34,184 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:34,184 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:34,184 DEBUG [RS:2;jenkins-hbase4:43675] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:34,184 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43675,1690229432113' 2023-07-24 20:10:34,184 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:34,185 DEBUG [RS:2;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:34,185 DEBUG [RS:2;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:34,185 INFO [RS:2;jenkins-hbase4:43675] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:10:34,185 INFO [RS:2;jenkins-hbase4:43675] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 20:10:34,236 INFO [RS:0;jenkins-hbase4:36167] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36167%2C1690229431901, suffix=, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36167,1690229431901, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs, maxLogs=32 2023-07-24 20:10:34,261 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 20:10:34,261 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:34,261 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:34,267 INFO [RS:0;jenkins-hbase4:36167] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36167,1690229431901/jenkins-hbase4.apache.org%2C36167%2C1690229431901.1690229434239 2023-07-24 20:10:34,268 DEBUG [RS:0;jenkins-hbase4:36167] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK], DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK]] 2023-07-24 20:10:34,279 DEBUG [jenkins-hbase4:43005] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 20:10:34,282 INFO [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45007%2C1690229432026, suffix=, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,45007,1690229432026, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs, maxLogs=32 2023-07-24 20:10:34,288 INFO [RS:2;jenkins-hbase4:43675] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43675%2C1690229432113, suffix=, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,43675,1690229432113, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs, maxLogs=32 2023-07-24 20:10:34,304 DEBUG [jenkins-hbase4:43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:34,306 DEBUG [jenkins-hbase4:43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:34,306 DEBUG [jenkins-hbase4:43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:34,306 DEBUG [jenkins-hbase4:43005] balancer.BaseLoadBalancer$Cluster(362): 
server 2 is on host 0 2023-07-24 20:10:34,307 DEBUG [jenkins-hbase4:43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:34,315 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45007,1690229432026, state=OPENING 2023-07-24 20:10:34,318 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:34,318 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 20:10:34,324 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:34,325 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 20:10:34,325 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:34,325 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:34,328 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 20:10:34,330 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:34,331 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:10:34,334 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:34,341 INFO [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,45007,1690229432026/jenkins-hbase4.apache.org%2C45007%2C1690229432026.1690229434285 2023-07-24 20:10:34,342 DEBUG [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK], DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK], 
DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK]] 2023-07-24 20:10:34,353 INFO [RS:2;jenkins-hbase4:43675] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,43675,1690229432113/jenkins-hbase4.apache.org%2C43675%2C1690229432113.1690229434290 2023-07-24 20:10:34,354 DEBUG [RS:2;jenkins-hbase4:43675] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK], DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK]] 2023-07-24 20:10:34,364 WARN [ReadOnlyZKClient-127.0.0.1:52127@0x39ea1c96] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 20:10:34,395 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:10:34,402 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38118, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:10:34,403 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45007] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:38118 deadline: 1690229494402, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,536 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:34,543 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:34,548 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38132, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:34,561 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 20:10:34,561 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:34,566 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45007%2C1690229432026.meta, suffix=.meta, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,45007,1690229432026, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs, maxLogs=32 2023-07-24 20:10:34,583 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 20:10:34,584 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:34,585 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:34,591 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,45007,1690229432026/jenkins-hbase4.apache.org%2C45007%2C1690229432026.meta.1690229434567.meta 2023-07-24 20:10:34,592 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK], DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK]] 2023-07-24 20:10:34,592 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:34,594 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:10:34,597 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 20:10:34,599 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
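The coprocessor records show MultiRowMutationEndpoint being loaded onto hbase:meta from its table descriptor, where it was declared as coprocessor$1 in the create at 20:10:33,903. A sketch of the equivalent declaration for an ordinary table using the 2.x descriptor builders follows; the table name "example" is hypothetical.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CoprocessorDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    // "example" is a hypothetical table; hbase:meta receives the same
    // endpoint through the IS_META descriptor shown in the log above.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("info")))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
  }
}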
2023-07-24 20:10:34,605 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 20:10:34,605 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:34,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 20:10:34,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 20:10:34,613 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:10:34,615 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/info 2023-07-24 20:10:34,616 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/info 2023-07-24 20:10:34,617 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:10:34,619 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:34,619 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:10:34,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:10:34,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:10:34,621 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:10:34,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:34,623 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:10:34,625 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/table 2023-07-24 20:10:34,625 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/table 2023-07-24 20:10:34,625 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:10:34,626 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:34,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740 2023-07-24 20:10:34,632 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740 2023-07-24 20:10:34,636 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 20:10:34,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:10:34,639 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10442348000, jitterRate=-0.027480557560920715}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:10:34,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:10:34,652 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690229434533 2023-07-24 20:10:34,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 20:10:34,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 20:10:34,702 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45007,1690229432026, state=OPEN 2023-07-24 20:10:34,706 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 20:10:34,706 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:10:34,712 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 20:10:34,712 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45007,1690229432026 in 372 msec 2023-07-24 20:10:34,725 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 20:10:34,725 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 657 msec 2023-07-24 20:10:34,733 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1930 sec 2023-07-24 20:10:34,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690229434733, completionTime=-1 2023-07-24 20:10:34,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 20:10:34,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
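Once the TransitRegionStateProcedure completes and the hbase:meta location in ZooKeeper flips to OPEN, the NotServingRegionException seen at 20:10:34,403 stops recurring and clients can read meta through the ordinary Table API. A minimal sketch of such a read is given below; the row key is purely illustrative and the connection setup is assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Any meta row key would do; this one is hypothetical.
      Result r = meta.get(new Get(Bytes.toBytes("hbase:namespace,,1")));
      System.out.println("meta row empty? " + r.isEmpty());
    }
  }
}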
2023-07-24 20:10:34,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 20:10:34,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690229494795 2023-07-24 20:10:34,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690229554795 2023-07-24 20:10:34,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 61 msec 2023-07-24 20:10:34,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1690229429933-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1690229429933-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1690229429933-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43005, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:34,825 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 20:10:34,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 20:10:34,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:34,864 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 20:10:34,867 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:34,871 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:34,895 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:34,899 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/namespace/05a513489858765cac84f66c0c371109 empty. 2023-07-24 20:10:34,900 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:34,900 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 20:10:34,921 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:34,927 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 20:10:34,938 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:34,952 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:34,960 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:34,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:34,963 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 05a513489858765cac84f66c0c371109, NAME => 'hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:34,964 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e empty. 2023-07-24 20:10:34,964 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:34,965 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 20:10:35,016 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:35,016 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 05a513489858765cac84f66c0c371109, disabling compactions & flushes 2023-07-24 20:10:35,017 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:35,017 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:35,017 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. after waiting 0 ms 2023-07-24 20:10:35,017 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:35,017 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 
2023-07-24 20:10:35,017 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 05a513489858765cac84f66c0c371109: 2023-07-24 20:10:35,025 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:35,026 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:35,029 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => fa4520efe7f6261623ea4e6234947d9e, NAME => 'hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:35,048 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229435030"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229435030"}]},"ts":"1690229435030"} 2023-07-24 20:10:35,115 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:35,116 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing fa4520efe7f6261623ea4e6234947d9e, disabling compactions & flushes 2023-07-24 20:10:35,116 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,116 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,116 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. after waiting 0 ms 2023-07-24 20:10:35,116 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,116 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,116 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for fa4520efe7f6261623ea4e6234947d9e: 2023-07-24 20:10:35,117 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:10:35,120 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:35,122 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:35,124 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229435124"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229435124"}]},"ts":"1690229435124"} 2023-07-24 20:10:35,126 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229435120"}]},"ts":"1690229435120"} 2023-07-24 20:10:35,129 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:10:35,131 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:35,131 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229435131"}]},"ts":"1690229435131"} 2023-07-24 20:10:35,132 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 20:10:35,134 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 20:10:35,136 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:35,137 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:35,137 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:35,137 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:35,137 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:35,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, ASSIGN}] 2023-07-24 20:10:35,141 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:35,141 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:35,141 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:35,141 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:35,141 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:35,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:rsgroup, region=fa4520efe7f6261623ea4e6234947d9e, ASSIGN}] 2023-07-24 20:10:35,143 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, ASSIGN 2023-07-24 20:10:35,146 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:35,147 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fa4520efe7f6261623ea4e6234947d9e, ASSIGN 2023-07-24 20:10:35,149 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=fa4520efe7f6261623ea4e6234947d9e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:35,150 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-24 20:10:35,152 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:35,152 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fa4520efe7f6261623ea4e6234947d9e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:35,152 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229435152"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229435152"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229435152"}]},"ts":"1690229435152"} 2023-07-24 20:10:35,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229435152"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229435152"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229435152"}]},"ts":"1690229435152"} 2023-07-24 20:10:35,160 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:35,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure fa4520efe7f6261623ea4e6234947d9e, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:35,315 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:35,316 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:35,319 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:35,331 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:35,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05a513489858765cac84f66c0c371109, NAME => 'hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:35,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,334 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:35,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fa4520efe7f6261623ea4e6234947d9e, NAME => 'hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:35,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:10:35,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. service=MultiRowMutationService 2023-07-24 20:10:35,335 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 20:10:35,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:35,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,339 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,339 INFO [StoreOpener-fa4520efe7f6261623ea4e6234947d9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,342 DEBUG [StoreOpener-05a513489858765cac84f66c0c371109-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info 2023-07-24 20:10:35,342 DEBUG [StoreOpener-05a513489858765cac84f66c0c371109-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info 2023-07-24 20:10:35,342 DEBUG [StoreOpener-fa4520efe7f6261623ea4e6234947d9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/m 2023-07-24 20:10:35,342 DEBUG [StoreOpener-fa4520efe7f6261623ea4e6234947d9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/m 2023-07-24 20:10:35,342 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05a513489858765cac84f66c0c371109 columnFamilyName info 2023-07-24 20:10:35,343 INFO 
[StoreOpener-fa4520efe7f6261623ea4e6234947d9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fa4520efe7f6261623ea4e6234947d9e columnFamilyName m 2023-07-24 20:10:35,343 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] regionserver.HStore(310): Store=05a513489858765cac84f66c0c371109/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:35,344 INFO [StoreOpener-fa4520efe7f6261623ea4e6234947d9e-1] regionserver.HStore(310): Store=fa4520efe7f6261623ea4e6234947d9e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:35,346 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,346 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,346 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:35,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:35,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:35,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:35,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 05a513489858765cac84f66c0c371109; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10459623040, jitterRate=-0.025871694087982178}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:35,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05a513489858765cac84f66c0c371109: 2023-07-24 20:10:35,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fa4520efe7f6261623ea4e6234947d9e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3c4e6310, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:35,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fa4520efe7f6261623ea4e6234947d9e: 2023-07-24 20:10:35,362 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109., pid=8, masterSystemTime=1690229435315 2023-07-24 20:10:35,366 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e., pid=9, masterSystemTime=1690229435317 2023-07-24 20:10:35,375 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fa4520efe7f6261623ea4e6234947d9e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:35,375 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,376 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:35,376 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229435375"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229435375"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229435375"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229435375"}]},"ts":"1690229435375"} 2023-07-24 20:10:35,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:35,378 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 
2023-07-24 20:10:35,378 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:35,380 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229435378"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229435378"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229435378"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229435378"}]},"ts":"1690229435378"} 2023-07-24 20:10:35,389 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 20:10:35,389 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure fa4520efe7f6261623ea4e6234947d9e, server=jenkins-hbase4.apache.org,45007,1690229432026 in 219 msec 2023-07-24 20:10:35,391 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 20:10:35,392 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,43675,1690229432113 in 225 msec 2023-07-24 20:10:35,397 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 20:10:35,397 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=fa4520efe7f6261623ea4e6234947d9e, ASSIGN in 248 msec 2023-07-24 20:10:35,397 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 20:10:35,397 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, ASSIGN in 253 msec 2023-07-24 20:10:35,399 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:35,399 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229435399"}]},"ts":"1690229435399"} 2023-07-24 20:10:35,399 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:35,400 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229435400"}]},"ts":"1690229435400"} 2023-07-24 20:10:35,402 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 20:10:35,404 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 20:10:35,407 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:35,407 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:35,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 486 msec 2023-07-24 20:10:35,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 557 msec 2023-07-24 20:10:35,465 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 20:10:35,465 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 20:10:35,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 20:10:35,470 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:10:35,470 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:35,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:10:35,486 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:10:35,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 20:10:35,536 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:10:35,543 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 49 msec 2023-07-24 20:10:35,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 20:10:35,569 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:10:35,577 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 28 msec 2023-07-24 
20:10:35,589 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 20:10:35,592 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 20:10:35,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.364sec 2023-07-24 20:10:35,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 20:10:35,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 20:10:35,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 20:10:35,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1690229429933-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 20:10:35,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1690229429933-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 20:10:35,603 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:35,603 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:35,606 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:10:35,613 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 20:10:35,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 20:10:35,639 DEBUG [Listener at localhost/39801] zookeeper.ReadOnlyZKClient(139): Connect 0x1a276cd2 to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:35,645 DEBUG [Listener at localhost/39801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47d8ccec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:35,660 DEBUG [hconnection-0x7619449a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:10:35,676 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38140, version=2.4.18-SNAPSHOT, 
sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:10:35,687 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:35,689 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:35,701 DEBUG [Listener at localhost/39801] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 20:10:35,711 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60832, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 20:10:35,728 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 20:10:35,728 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:35,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 20:10:35,736 DEBUG [Listener at localhost/39801] zookeeper.ReadOnlyZKClient(139): Connect 0x51d48076 to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:35,742 DEBUG [Listener at localhost/39801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61cb1bf4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:35,742 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52127 2023-07-24 20:10:35,746 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:35,749 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10198f4d5ba000a connected 2023-07-24 20:10:35,784 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=420, OpenFileDescriptor=684, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=176, AvailableMemoryMB=6655 2023-07-24 20:10:35,787 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-24 20:10:35,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:35,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:35,873 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 20:10:35,886 INFO [Listener at localhost/39801] client.ConnectionUtils(127): 
regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:10:35,886 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:35,886 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:35,887 INFO [Listener at localhost/39801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:10:35,887 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:35,887 INFO [Listener at localhost/39801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:10:35,887 INFO [Listener at localhost/39801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:10:35,889 INFO [Listener at localhost/39801] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36633 2023-07-24 20:10:35,890 INFO [Listener at localhost/39801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:10:35,891 DEBUG [Listener at localhost/39801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:10:35,893 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:35,897 INFO [Listener at localhost/39801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:35,900 INFO [Listener at localhost/39801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36633 connecting to ZooKeeper ensemble=127.0.0.1:52127 2023-07-24 20:10:35,908 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:366330x0, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:35,909 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(162): regionserver:366330x0, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:10:35,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36633-0x10198f4d5ba000b connected 2023-07-24 20:10:35,911 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 20:10:35,912 DEBUG [Listener at localhost/39801] zookeeper.ZKUtil(164): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:10:35,917 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, 
numCallQueues=1, port=36633 2023-07-24 20:10:35,917 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36633 2023-07-24 20:10:35,918 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36633 2023-07-24 20:10:35,918 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36633 2023-07-24 20:10:35,918 DEBUG [Listener at localhost/39801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36633 2023-07-24 20:10:35,921 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:10:35,921 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:10:35,921 INFO [Listener at localhost/39801] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:10:35,922 INFO [Listener at localhost/39801] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:10:35,922 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:10:35,922 INFO [Listener at localhost/39801] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:10:35,922 INFO [Listener at localhost/39801] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 20:10:35,923 INFO [Listener at localhost/39801] http.HttpServer(1146): Jetty bound to port 45271 2023-07-24 20:10:35,923 INFO [Listener at localhost/39801] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:35,933 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:35,933 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20ac2297{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:10:35,934 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:35,934 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70449f05{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:10:35,947 INFO [Listener at localhost/39801] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:10:35,948 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:10:35,949 INFO [Listener at localhost/39801] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:10:35,949 INFO [Listener at localhost/39801] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 20:10:35,950 INFO [Listener at localhost/39801] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:35,952 INFO [Listener at localhost/39801] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@57f5533e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:35,953 INFO [Listener at localhost/39801] server.AbstractConnector(333): Started ServerConnector@10063311{HTTP/1.1, (http/1.1)}{0.0.0.0:45271} 2023-07-24 20:10:35,954 INFO [Listener at localhost/39801] server.Server(415): Started @11908ms 2023-07-24 20:10:35,957 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(951): ClusterId : 4747f820-11b4-4442-9950-46fdbe51817f 2023-07-24 20:10:35,957 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:35,960 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:35,960 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:35,963 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:35,965 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ReadOnlyZKClient(139): Connect 0x02024450 to 127.0.0.1:52127 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:35,975 DEBUG [RS:3;jenkins-hbase4:36633] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cc5aa60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:35,975 DEBUG [RS:3;jenkins-hbase4:36633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8fa2f20, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:35,988 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:36633 2023-07-24 20:10:35,989 INFO [RS:3;jenkins-hbase4:36633] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:35,989 INFO [RS:3;jenkins-hbase4:36633] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:35,989 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:35,991 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43005,1690229429933 with isa=jenkins-hbase4.apache.org/172.31.14.131:36633, startcode=1690229435886 2023-07-24 20:10:35,992 DEBUG [RS:3;jenkins-hbase4:36633] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:35,997 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59661, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:35,997 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:35,997 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:10:35,998 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1 2023-07-24 20:10:35,998 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32791 2023-07-24 20:10:35,998 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34897 2023-07-24 20:10:36,003 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:36,003 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:36,004 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:36,003 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:36,003 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:36,004 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:10:36,010 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:36,010 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:36,010 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36633,1690229435886] 2023-07-24 20:10:36,011 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,011 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43005,1690229429933] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,015 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,015 WARN [RS:3;jenkins-hbase4:36633] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
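The ZKWatcher/ZKUtil entries above show each region server re-arming watches after a NodeChildrenChanged event on /hbase/rs, where every live region server keeps an ephemeral child znode. Below is only an illustrative, minimal ZooKeeper-client sketch of that watch pattern, not HBase's internal ZKWatcher: the quorum address 127.0.0.1:52127 and the /hbase/rs path are taken from the log above; the class name, timeouts, and everything else are assumptions for the sketch.

import java.util.List;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsChildrenWatchSketch {
  public static void main(String[] args) throws Exception {
    // Quorum string taken from the log; assumes the mini-cluster (or a real quorum) is reachable.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:52127", 30_000, event -> { });
    Watcher watcher = event -> {
      if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
        // This is the same event type the ZKWatcher lines above report for /hbase/rs.
        System.out.println("children of " + event.getPath() + " changed");
      }
    };
    // getChildren with a watcher: one child under /hbase/rs per live region server.
    List<String> servers = zk.getChildren("/hbase/rs", watcher);
    servers.forEach(System.out::println);
    Thread.sleep(60_000); // keep the session alive long enough to observe a change
    zk.close();
  }
}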
2023-07-24 20:10:36,015 INFO [RS:3;jenkins-hbase4:36633] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:36,015 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,020 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:36,021 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,021 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,022 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ZKUtil(162): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,023 DEBUG [RS:3;jenkins-hbase4:36633] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:36,023 INFO [RS:3;jenkins-hbase4:36633] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:36,026 INFO [RS:3;jenkins-hbase4:36633] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:36,027 INFO [RS:3;jenkins-hbase4:36633] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:36,027 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:36,028 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:36,030 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,030 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:36,031 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,031 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,031 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,031 DEBUG [RS:3;jenkins-hbase4:36633] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:36,036 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:36,036 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:36,036 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:36,050 INFO [RS:3;jenkins-hbase4:36633] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:36,051 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36633,1690229435886-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:36,063 INFO [RS:3;jenkins-hbase4:36633] regionserver.Replication(203): jenkins-hbase4.apache.org,36633,1690229435886 started 2023-07-24 20:10:36,063 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36633,1690229435886, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36633, sessionid=0x10198f4d5ba000b 2023-07-24 20:10:36,064 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:36,064 DEBUG [RS:3;jenkins-hbase4:36633] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,064 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36633,1690229435886' 2023-07-24 20:10:36,064 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:36,064 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:36,068 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:36,068 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:36,068 DEBUG [RS:3;jenkins-hbase4:36633] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:36,068 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36633,1690229435886' 2023-07-24 20:10:36,068 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:36,069 DEBUG [RS:3;jenkins-hbase4:36633] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:36,070 DEBUG [RS:3;jenkins-hbase4:36633] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:36,070 INFO [RS:3;jenkins-hbase4:36633] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:10:36,070 INFO [RS:3;jenkins-hbase4:36633] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 20:10:36,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:36,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:36,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:36,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:36,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:36,095 DEBUG [hconnection-0x3f04a498-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:10:36,101 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38150, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:10:36,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:36,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:36,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:36,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:36,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:60832 deadline: 1690230636124, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
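For context, the ConstraintException above is the server-side rejection of a moveServers call issued from the test's setup/teardown path; the client-side frames (RSGroupAdminClient.moveServers, VerifyingRSGroupAdminClient.moveServers, TestRSGroupsBase) appear in the stack trace that follows. Below is a minimal standalone sketch of an equivalent call, assuming the hbase-rsgroup client API referenced in that stack trace and a reachable cluster; it deliberately uses the master's address from this log (jenkins-hbase4.apache.org:43005), which is exactly why the server answers with the ConstraintException.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");          // "add rsgroup master", as logged above
      // Moving the master's own address fails: only live region server addresses
      // are accepted, hence "Server ... is either offline or it does not exist".
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43005)),
          "master");
    }
  }
}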
2023-07-24 20:10:36,127 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:36,129 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:36,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:36,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:36,132 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:36,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:36,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:36,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:36,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:36,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:36,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:36,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:36,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:36,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:36,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:36,172 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,173 INFO [RS:3;jenkins-hbase4:36633] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36633%2C1690229435886, suffix=, logDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36633,1690229435886, archiveDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs, maxLogs=32 2023-07-24 20:10:36,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:36,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:36,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:36,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:36,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to default 2023-07-24 20:10:36,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:36,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:36,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:36,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:36,223 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK] 2023-07-24 
20:10:36,224 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK] 2023-07-24 20:10:36,229 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK] 2023-07-24 20:10:36,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:36,235 INFO [RS:3;jenkins-hbase4:36633] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/WALs/jenkins-hbase4.apache.org,36633,1690229435886/jenkins-hbase4.apache.org%2C36633%2C1690229435886.1690229436175 2023-07-24 20:10:36,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:36,239 DEBUG [RS:3;jenkins-hbase4:36633] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41925,DS-4d145b76-665c-4122-a267-81459a7dcebb,DISK], DatanodeInfoWithStorage[127.0.0.1:33397,DS-4c2bb9cf-42f6-4e01-8e40-6d0e320e306a,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-c1ea1d19-77ac-4821-abab-73633b2c9332,DISK]] 2023-07-24 20:10:36,242 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:36,248 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:36,249 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:36,250 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:36,250 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:36,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-24 20:10:36,264 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:36,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:10:36,273 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,273 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,274 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 empty. 2023-07-24 20:10:36,275 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 empty. 2023-07-24 20:10:36,275 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,279 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,279 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,283 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,286 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,286 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 empty. 2023-07-24 20:10:36,287 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 empty. 
2023-07-24 20:10:36,287 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,288 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,288 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 empty. 2023-07-24 20:10:36,289 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,289 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 20:10:36,335 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:36,337 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7f7c014b739edee6fdbd3b8fa162c851, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:36,339 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => aa4b78994a4b4203048134b0354cde65, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:36,339 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ff1e4e1887bab7297600acc8dbc02745, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => 
'1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:36,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:10:36,402 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing aa4b78994a4b4203048134b0354cde65, disabling compactions & flushes 2023-07-24 20:10:36,403 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:36,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:36,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. after waiting 0 ms 2023-07-24 20:10:36,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:36,404 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:36,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for aa4b78994a4b4203048134b0354cde65: 2023-07-24 20:10:36,405 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => cb2aaef89817825c9985e97d9e2d9155, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 7f7c014b739edee6fdbd3b8fa162c851, disabling compactions & flushes 2023-07-24 20:10:36,412 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. after waiting 0 ms 2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:36,412 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 
2023-07-24 20:10:36,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 7f7c014b739edee6fdbd3b8fa162c851: 2023-07-24 20:10:36,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1aa93beecac72b56d3d112b7c2da42b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:36,414 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ff1e4e1887bab7297600acc8dbc02745, disabling compactions & flushes 2023-07-24 20:10:36,419 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:36,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:36,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. after waiting 0 ms 2023-07-24 20:10:36,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:36,419 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 
2023-07-24 20:10:36,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ff1e4e1887bab7297600acc8dbc02745: 2023-07-24 20:10:36,439 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 1aa93beecac72b56d3d112b7c2da42b4, disabling compactions & flushes 2023-07-24 20:10:36,440 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. after waiting 0 ms 2023-07-24 20:10:36,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,440 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 1aa93beecac72b56d3d112b7c2da42b4: 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing cb2aaef89817825c9985e97d9e2d9155, disabling compactions & flushes 2023-07-24 20:10:36,443 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 
after waiting 0 ms 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,443 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for cb2aaef89817825c9985e97d9e2d9155: 2023-07-24 20:10:36,449 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:36,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229436450"}]},"ts":"1690229436450"} 2023-07-24 20:10:36,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229436450"}]},"ts":"1690229436450"} 2023-07-24 20:10:36,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229436450"}]},"ts":"1690229436450"} 2023-07-24 20:10:36,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229436450"}]},"ts":"1690229436450"} 2023-07-24 20:10:36,452 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229436450"}]},"ts":"1690229436450"} 2023-07-24 20:10:36,517 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
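The create request logged at 20:10:36,234 and the five regions just added to hbase:meta correspond to a pre-split table with a single column family 'f'. A rough client-side equivalent is sketched below using the standard HBase 2.x Admin API; this is an illustration rather than the test's actual code, and the two middle split keys are replaced with printable stand-ins because the real ones in the log (i\xBF\x14i\xBE and r\x1C\xC7r\x1B) contain non-ASCII bytes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // One family 'f' with defaults (VERSIONS=1, BLOCKSIZE=65536, ...), REGION_REPLICATION=1,
      // matching the descriptor printed in the create log entry above.
      TableDescriptorBuilder tdb = TableDescriptorBuilder.newBuilder(name)
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // Four split keys yield the five regions added to meta; middle keys are stand-ins.
      byte[][] splits = {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("iiiii"),
          Bytes.toBytes("rrrrr"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(tdb.build(), splits);
    }
  }
}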
2023-07-24 20:10:36,519 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:36,519 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229436519"}]},"ts":"1690229436519"} 2023-07-24 20:10:36,522 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 20:10:36,530 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:36,531 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:36,531 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:36,531 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:36,531 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, ASSIGN}] 2023-07-24 20:10:36,535 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, ASSIGN 2023-07-24 20:10:36,536 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, ASSIGN 2023-07-24 20:10:36,536 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, ASSIGN 2023-07-24 20:10:36,537 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, ASSIGN 2023-07-24 20:10:36,538 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:36,539 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:36,539 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:36,540 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:36,541 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, ASSIGN 2023-07-24 20:10:36,543 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:36,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:10:36,689 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 20:10:36,692 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,692 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,692 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,692 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,692 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229436692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229436692"}]},"ts":"1690229436692"} 2023-07-24 20:10:36,692 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,692 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229436692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229436692"}]},"ts":"1690229436692"} 2023-07-24 20:10:36,692 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229436692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229436692"}]},"ts":"1690229436692"} 2023-07-24 20:10:36,693 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229436692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229436692"}]},"ts":"1690229436692"} 2023-07-24 20:10:36,692 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229436692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229436692"}]},"ts":"1690229436692"} 2023-07-24 20:10:36,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 
aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:36,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:36,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=16, state=RUNNABLE; OpenRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:36,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=15, state=RUNNABLE; OpenRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:36,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:36,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:36,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f7c014b739edee6fdbd3b8fa162c851, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 20:10:36,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 
2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ff1e4e1887bab7297600acc8dbc02745, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,859 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,859 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,861 DEBUG [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/f 2023-07-24 20:10:36,861 DEBUG [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/f 2023-07-24 20:10:36,863 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ff1e4e1887bab7297600acc8dbc02745 columnFamilyName f 2023-07-24 20:10:36,863 DEBUG [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/f 2023-07-24 20:10:36,863 DEBUG [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/f 2023-07-24 20:10:36,863 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] regionserver.HStore(310): Store=ff1e4e1887bab7297600acc8dbc02745/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:36,865 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f7c014b739edee6fdbd3b8fa162c851 columnFamilyName f 2023-07-24 20:10:36,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,869 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] regionserver.HStore(310): Store=7f7c014b739edee6fdbd3b8fa162c851/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:36,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:36,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:36,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:36,879 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ff1e4e1887bab7297600acc8dbc02745; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10106142240, jitterRate=-0.05879215896129608}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:36,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:36,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ff1e4e1887bab7297600acc8dbc02745: 2023-07-24 20:10:36,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f7c014b739edee6fdbd3b8fa162c851; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10754430240, jitterRate=0.001584365963935852}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:36,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f7c014b739edee6fdbd3b8fa162c851: 2023-07-24 20:10:36,883 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851., pid=19, masterSystemTime=1690229436850 2023-07-24 20:10:36,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745., pid=21, masterSystemTime=1690229436849 2023-07-24 20:10:36,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:36,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 
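As an aside on the numbers in the "Opened ..." entries above: the desiredMaxFileSize values look like the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB) scaled by the logged jitterRate. The following is only a quick arithmetic check of that reading of these particular log lines, not a statement of how the split policy is implemented; the 10 GiB base is an assumption.

    // Hedged sanity check of the logged split-policy numbers.
    // Assumption: the default 10 GiB hbase.hregion.max.filesize is the base being jittered;
    // jitterRate is copied from the "Opened ff1e4e1887bab7297600acc8dbc02745" line above.
    public class JitterCheck {
      public static void main(String[] args) {
        long base = 10737418240L;                    // 10 GiB default max region size
        double jitterRate = -0.05879215896129608;    // from the log line above
        long desired = (long) (base * (1 + jitterRate));
        System.out.println(desired);                 // ~10106142240, the logged desiredMaxFileSize
      }
    }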
2023-07-24 20:10:36,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1aa93beecac72b56d3d112b7c2da42b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 20:10:36,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,889 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,889 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436889"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229436889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229436889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229436889"}]},"ts":"1690229436889"} 2023-07-24 20:10:36,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:10:36,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:36,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:36,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 
2023-07-24 20:10:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb2aaef89817825c9985e97d9e2d9155, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 20:10:36,894 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,894 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436894"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229436894"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229436894"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229436894"}]},"ts":"1690229436894"} 2023-07-24 20:10:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,902 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-24 20:10:36,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, ASSIGN in 371 msec 2023-07-24 20:10:36,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,45007,1690229432026 in 198 msec 2023-07-24 20:10:36,905 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=15 2023-07-24 20:10:36,908 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,908 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=15, state=SUCCESS; OpenRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,43675,1690229432113 in 197 msec 2023-07-24 20:10:36,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, ASSIGN in 374 msec 2023-07-24 20:10:36,911 INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,912 DEBUG [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/f 2023-07-24 20:10:36,912 DEBUG [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/f 2023-07-24 20:10:36,913 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb2aaef89817825c9985e97d9e2d9155 columnFamilyName f 2023-07-24 20:10:36,913 DEBUG [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/f 2023-07-24 20:10:36,913 DEBUG [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/f 2023-07-24 20:10:36,913 INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1aa93beecac72b56d3d112b7c2da42b4 columnFamilyName f 2023-07-24 20:10:36,913 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] regionserver.HStore(310): Store=cb2aaef89817825c9985e97d9e2d9155/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:36,915 
INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] regionserver.HStore(310): Store=1aa93beecac72b56d3d112b7c2da42b4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:36,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:36,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:36,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:36,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:36,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1aa93beecac72b56d3d112b7c2da42b4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10022499840, jitterRate=-0.06658196449279785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:36,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1aa93beecac72b56d3d112b7c2da42b4: 2023-07-24 20:10:36,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb2aaef89817825c9985e97d9e2d9155; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10302520000, jitterRate=-0.04050305485725403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:36,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open 
journal for cb2aaef89817825c9985e97d9e2d9155: 2023-07-24 20:10:36,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4., pid=22, masterSystemTime=1690229436850 2023-07-24 20:10:36,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155., pid=20, masterSystemTime=1690229436849 2023-07-24 20:10:36,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:36,938 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:36,939 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229436938"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229436938"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229436938"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229436938"}]},"ts":"1690229436938"} 2023-07-24 20:10:36,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:36,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:36,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa4b78994a4b4203048134b0354cde65, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 20:10:36,941 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:36,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,942 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436940"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229436940"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229436940"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229436940"}]},"ts":"1690229436940"} 2023-07-24 20:10:36,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,948 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-24 20:10:36,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 241 msec 2023-07-24 20:10:36,954 DEBUG [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/f 2023-07-24 20:10:36,954 DEBUG [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/f 2023-07-24 20:10:36,955 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa4b78994a4b4203048134b0354cde65 columnFamilyName f 2023-07-24 20:10:36,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=16 2023-07-24 20:10:36,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=16, state=SUCCESS; OpenRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,43675,1690229432113 in 247 msec 2023-07-24 20:10:36,962 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] regionserver.HStore(310): Store=aa4b78994a4b4203048134b0354cde65/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:36,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, ASSIGN in 420 msec 2023-07-24 20:10:36,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,965 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, ASSIGN in 425 msec 2023-07-24 20:10:36,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:36,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:36,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa4b78994a4b4203048134b0354cde65; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9409558080, jitterRate=-0.12366661429405212}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:36,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa4b78994a4b4203048134b0354cde65: 2023-07-24 20:10:36,982 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65., pid=18, masterSystemTime=1690229436849 2023-07-24 20:10:36,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:36,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:36,986 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:36,986 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229436985"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229436985"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229436985"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229436985"}]},"ts":"1690229436985"} 2023-07-24 20:10:36,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-24 20:10:36,993 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,43675,1690229432113 in 294 msec 2023-07-24 20:10:36,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-24 20:10:36,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, ASSIGN in 462 msec 2023-07-24 20:10:36,998 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:36,999 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229436999"}]},"ts":"1690229436999"} 2023-07-24 20:10:37,002 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 20:10:37,011 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:37,016 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 777 msec 2023-07-24 20:10:37,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:10:37,394 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 
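For orientation, the create path traced above (CreateTableProcedure pid=12 plus its ASSIGN subprocedures, confirmed by the client at 20:10:37,394) is what a client-side createTable call with explicit split keys produces. The sketch below is a minimal, hedged reconstruction: the table name and column family "f" come from the log, but the test's actual helper methods are not shown here and the two middle split keys are printable placeholders standing in for the binary boundaries visible in the region names.

    // Hedged sketch: client-side call corresponding to a CreateTableProcedure like pid=12 above.
    // Split keys are assumptions standing in for '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Four split keys => five regions, as in the log.
          byte[][] splitKeys = new byte[][] {
              Bytes.toBytes("aaaaa"),
              Bytes.toBytes("jjjjj"),   // placeholder for the binary key i\xBF\x14i\xBE
              Bytes.toBytes("rrrrr"),   // placeholder for the binary key r\x1C\xC7r\x1B
              Bytes.toBytes("zzzzz")
          };
          // Synchronous create: returns once the create procedure has completed,
          // matching the "Operation: CREATE ... procId: 12 completed" entry above.
          admin.createTable(desc, splitKeys);
        }
      }
    }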
2023-07-24 20:10:37,394 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-24 20:10:37,395 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:37,401 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-24 20:10:37,402 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:37,402 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-24 20:10:37,403 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:37,408 DEBUG [Listener at localhost/39801] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:37,413 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:37,417 DEBUG [Listener at localhost/39801] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:37,422 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:37,423 DEBUG [Listener at localhost/39801] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:37,428 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:37,430 DEBUG [Listener at localhost/39801] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:37,435 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38154, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:37,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:37,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:37,452 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,469 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:37,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:37,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:37,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 7f7c014b739edee6fdbd3b8fa162c851 to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:37,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, REOPEN/MOVE 2023-07-24 20:10:37,479 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, REOPEN/MOVE 2023-07-24 20:10:37,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region aa4b78994a4b4203048134b0354cde65 to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:37,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:37,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:37,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:37,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:37,481 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:37,481 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437481"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437481"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437481"}]},"ts":"1690229437481"} 2023-07-24 20:10:37,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, REOPEN/MOVE 2023-07-24 20:10:37,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region ff1e4e1887bab7297600acc8dbc02745 to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,483 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, REOPEN/MOVE 2023-07-24 20:10:37,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:37,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:37,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:37,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:37,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:37,485 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:37,485 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:37,485 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437485"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437485"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437485"}]},"ts":"1690229437485"} 2023-07-24 20:10:37,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, REOPEN/MOVE 2023-07-24 20:10:37,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 
cb2aaef89817825c9985e97d9e2d9155 to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,489 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, REOPEN/MOVE 2023-07-24 20:10:37,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:37,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:37,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:37,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:37,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:37,491 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:37,491 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437491"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437491"}]},"ts":"1690229437491"} 2023-07-24 20:10:37,492 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:37,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, REOPEN/MOVE 2023-07-24 20:10:37,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 1aa93beecac72b56d3d112b7c2da42b4 to RSGroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:37,494 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, REOPEN/MOVE 2023-07-24 20:10:37,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:37,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:37,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:37,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:37,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:37,496 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; CloseRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:37,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, REOPEN/MOVE 2023-07-24 20:10:37,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1805923026, current retry=0 2023-07-24 20:10:37,498 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:37,499 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437497"}]},"ts":"1690229437497"} 2023-07-24 20:10:37,501 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, REOPEN/MOVE 2023-07-24 20:10:37,504 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:37,504 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:37,505 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437504"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437504"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437504"}]},"ts":"1690229437504"} 2023-07-24 20:10:37,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:37,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:37,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f7c014b739edee6fdbd3b8fa162c851, disabling compactions & flushes 2023-07-24 20:10:37,659 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:37,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:37,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ff1e4e1887bab7297600acc8dbc02745, disabling compactions & flushes 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. after waiting 0 ms 2023-07-24 20:10:37,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. after waiting 0 ms 2023-07-24 20:10:37,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:37,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:37,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 
2023-07-24 20:10:37,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f7c014b739edee6fdbd3b8fa162c851: 2023-07-24 20:10:37,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7f7c014b739edee6fdbd3b8fa162c851 move to jenkins-hbase4.apache.org,36633,1690229435886 record at close sequenceid=2 2023-07-24 20:10:37,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:37,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:37,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ff1e4e1887bab7297600acc8dbc02745: 2023-07-24 20:10:37,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ff1e4e1887bab7297600acc8dbc02745 move to jenkins-hbase4.apache.org,36167,1690229431901 record at close sequenceid=2 2023-07-24 20:10:37,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:37,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:37,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1aa93beecac72b56d3d112b7c2da42b4, disabling compactions & flushes 2023-07-24 20:10:37,684 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:37,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:37,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. after waiting 0 ms 2023-07-24 20:10:37,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 
2023-07-24 20:10:37,688 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=CLOSED 2023-07-24 20:10:37,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:37,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:37,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa4b78994a4b4203048134b0354cde65, disabling compactions & flushes 2023-07-24 20:10:37,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:37,691 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437688"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229437688"}]},"ts":"1690229437688"} 2023-07-24 20:10:37,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:37,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. after waiting 0 ms 2023-07-24 20:10:37,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:37,692 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=CLOSED 2023-07-24 20:10:37,692 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229437692"}]},"ts":"1690229437692"} 2023-07-24 20:10:37,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-24 20:10:37,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,45007,1690229432026 in 209 msec 2023-07-24 20:10:37,700 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-24 20:10:37,700 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; CloseRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,43675,1690229432113 in 200 msec 2023-07-24 20:10:37,700 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:37,702 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:37,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:37,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:37,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:37,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa4b78994a4b4203048134b0354cde65: 2023-07-24 20:10:37,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aa4b78994a4b4203048134b0354cde65 move to jenkins-hbase4.apache.org,36633,1690229435886 record at close sequenceid=2 2023-07-24 20:10:37,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:37,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1aa93beecac72b56d3d112b7c2da42b4: 2023-07-24 20:10:37,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1aa93beecac72b56d3d112b7c2da42b4 move to jenkins-hbase4.apache.org,36633,1690229435886 record at close sequenceid=2 2023-07-24 20:10:37,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:37,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:37,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb2aaef89817825c9985e97d9e2d9155, disabling compactions & flushes 2023-07-24 20:10:37,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:37,712 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=CLOSED 2023-07-24 20:10:37,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:37,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. after waiting 0 ms 2023-07-24 20:10:37,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 
2023-07-24 20:10:37,712 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437712"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229437712"}]},"ts":"1690229437712"} 2023-07-24 20:10:37,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:37,714 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=CLOSED 2023-07-24 20:10:37,714 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437714"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229437714"}]},"ts":"1690229437714"} 2023-07-24 20:10:37,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:37,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:37,722 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-24 20:10:37,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb2aaef89817825c9985e97d9e2d9155: 2023-07-24 20:10:37,722 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,43675,1690229432113 in 223 msec 2023-07-24 20:10:37,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cb2aaef89817825c9985e97d9e2d9155 move to jenkins-hbase4.apache.org,36167,1690229431901 record at close sequenceid=2 2023-07-24 20:10:37,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-24 20:10:37,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 209 msec 2023-07-24 20:10:37,724 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:37,724 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36633,1690229435886; 
forceNewPlan=false, retain=false 2023-07-24 20:10:37,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:37,726 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=CLOSED 2023-07-24 20:10:37,726 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229437726"}]},"ts":"1690229437726"} 2023-07-24 20:10:37,731 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-24 20:10:37,731 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,43675,1690229432113 in 224 msec 2023-07-24 20:10:37,732 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:37,851 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-24 20:10:37,852 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:37,852 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:37,852 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:37,852 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:37,852 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437852"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437852"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437852"}]},"ts":"1690229437852"} 2023-07-24 20:10:37,852 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437851"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437851"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437851"}]},"ts":"1690229437851"} 2023-07-24 20:10:37,852 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437851"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437851"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437851"}]},"ts":"1690229437851"} 2023-07-24 20:10:37,852 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229437851"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437851"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437851"}]},"ts":"1690229437851"} 2023-07-24 20:10:37,852 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:37,854 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229437852"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229437852"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229437852"}]},"ts":"1690229437852"} 2023-07-24 20:10:37,855 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=24, state=RUNNABLE; OpenRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:37,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=25, state=RUNNABLE; OpenRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:37,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=23, state=RUNNABLE; OpenRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:37,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=29, state=RUNNABLE; OpenRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:37,867 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=27, state=RUNNABLE; OpenRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:38,010 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,011 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:38,011 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:38,012 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:10:38,012 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 
2023-07-24 20:10:38,017 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:10:38,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1aa93beecac72b56d3d112b7c2da42b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 20:10:38,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:38,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 
2023-07-24 20:10:38,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ff1e4e1887bab7297600acc8dbc02745, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 20:10:38,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:38,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,028 INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,029 DEBUG [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/f 2023-07-24 20:10:38,029 DEBUG [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/f 2023-07-24 20:10:38,030 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,030 INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1aa93beecac72b56d3d112b7c2da42b4 columnFamilyName f 2023-07-24 20:10:38,031 INFO [StoreOpener-1aa93beecac72b56d3d112b7c2da42b4-1] regionserver.HStore(310): Store=1aa93beecac72b56d3d112b7c2da42b4/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:38,032 DEBUG [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/f 2023-07-24 20:10:38,033 DEBUG [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/f 2023-07-24 20:10:38,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,035 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ff1e4e1887bab7297600acc8dbc02745 columnFamilyName f 2023-07-24 20:10:38,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,035 INFO [StoreOpener-ff1e4e1887bab7297600acc8dbc02745-1] regionserver.HStore(310): Store=ff1e4e1887bab7297600acc8dbc02745/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:38,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1aa93beecac72b56d3d112b7c2da42b4; next sequenceid=5; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9916022240, jitterRate=-0.07649846374988556}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:38,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1aa93beecac72b56d3d112b7c2da42b4: 2023-07-24 20:10:38,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ff1e4e1887bab7297600acc8dbc02745; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10446611840, jitterRate=-0.02708345651626587}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:38,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ff1e4e1887bab7297600acc8dbc02745: 2023-07-24 20:10:38,047 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745., pid=34, masterSystemTime=1690229438011 2023-07-24 20:10:38,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4., pid=36, masterSystemTime=1690229438010 2023-07-24 20:10:38,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 
2023-07-24 20:10:38,059 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:38,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb2aaef89817825c9985e97d9e2d9155, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 20:10:38,059 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438058"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229438058"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229438058"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229438058"}]},"ts":"1690229438058"} 2023-07-24 20:10:38,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:38,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,061 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,063 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438061"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229438061"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229438061"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229438061"}]},"ts":"1690229438061"} 2023-07-24 20:10:38,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 
2023-07-24 20:10:38,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f7c014b739edee6fdbd3b8fa162c851, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 20:10:38,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:38,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,066 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,068 DEBUG [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/f 2023-07-24 20:10:38,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=25 2023-07-24 20:10:38,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=25, state=SUCCESS; OpenRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,36167,1690229431901 in 207 msec 2023-07-24 20:10:38,069 DEBUG [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/f 2023-07-24 20:10:38,070 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb2aaef89817825c9985e97d9e2d9155 columnFamilyName f 2023-07-24 20:10:38,071 INFO [StoreOpener-cb2aaef89817825c9985e97d9e2d9155-1] regionserver.HStore(310): Store=cb2aaef89817825c9985e97d9e2d9155/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:38,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, REOPEN/MOVE in 584 msec 2023-07-24 20:10:38,083 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,077 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, REOPEN/MOVE in 579 msec 2023-07-24 20:10:38,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=29 2023-07-24 20:10:38,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,085 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=29, state=SUCCESS; OpenRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,36633,1690229435886 in 204 msec 2023-07-24 20:10:38,087 DEBUG [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/f 2023-07-24 20:10:38,087 DEBUG [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/f 2023-07-24 20:10:38,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,088 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f7c014b739edee6fdbd3b8fa162c851 columnFamilyName f 2023-07-24 20:10:38,089 INFO [StoreOpener-7f7c014b739edee6fdbd3b8fa162c851-1] regionserver.HStore(310): Store=7f7c014b739edee6fdbd3b8fa162c851/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-24 20:10:38,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,099 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb2aaef89817825c9985e97d9e2d9155; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10157186080, jitterRate=-0.05403833091259003}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:38,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb2aaef89817825c9985e97d9e2d9155: 2023-07-24 20:10:38,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155., pid=37, masterSystemTime=1690229438011 2023-07-24 20:10:38,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f7c014b739edee6fdbd3b8fa162c851; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11702103840, jitterRate=0.08984334766864777}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:38,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f7c014b739edee6fdbd3b8fa162c851: 2023-07-24 20:10:38,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851., pid=35, masterSystemTime=1690229438010 2023-07-24 20:10:38,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:38,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 
2023-07-24 20:10:38,116 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:38,116 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438116"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229438116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229438116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229438116"}]},"ts":"1690229438116"} 2023-07-24 20:10:38,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:38,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:38,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:38,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa4b78994a4b4203048134b0354cde65, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 20:10:38,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:38,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,119 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,121 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438119"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229438119"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229438119"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229438119"}]},"ts":"1690229438119"} 2023-07-24 20:10:38,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=27 
2023-07-24 20:10:38,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=27, state=SUCCESS; OpenRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,36167,1690229431901 in 255 msec 2023-07-24 20:10:38,128 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=23 2023-07-24 20:10:38,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=23, state=SUCCESS; OpenRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,36633,1690229435886 in 265 msec 2023-07-24 20:10:38,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, REOPEN/MOVE in 636 msec 2023-07-24 20:10:38,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, REOPEN/MOVE in 654 msec 2023-07-24 20:10:38,137 DEBUG [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/f 2023-07-24 20:10:38,137 DEBUG [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/f 2023-07-24 20:10:38,138 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa4b78994a4b4203048134b0354cde65 columnFamilyName f 2023-07-24 20:10:38,138 INFO [StoreOpener-aa4b78994a4b4203048134b0354cde65-1] regionserver.HStore(310): Store=aa4b78994a4b4203048134b0354cde65/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:38,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa4b78994a4b4203048134b0354cde65; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9531525440, jitterRate=-0.11230751872062683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:38,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa4b78994a4b4203048134b0354cde65: 2023-07-24 20:10:38,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65., pid=33, masterSystemTime=1690229438010 2023-07-24 20:10:38,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:38,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:38,150 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,151 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438150"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229438150"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229438150"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229438150"}]},"ts":"1690229438150"} 2023-07-24 20:10:38,161 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=24 2023-07-24 20:10:38,162 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; OpenRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,36633,1690229435886 in 298 msec 2023-07-24 20:10:38,167 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, REOPEN/MOVE in 681 msec 2023-07-24 20:10:38,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-24 20:10:38,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1805923026. 
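The RSGroupAdminServer message above marks the end of the MoveTables request: every region of Group_testTableMoveTruncateAndDrop has been reopened (REOPEN/MOVE procedures pid=23..27) on a server in Group_testTableMoveTruncateAndDrop_1805923026. A minimal client-side sketch of the call that drives this, assuming the RSGroupAdminClient API shipped with the hbase-rsgroup module on branch-2; the table and group names are copied from the log, everything else is illustrative and not an excerpt from the test:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Ask the master to move every region of the table onto servers of the
      // target group; the RPC returns only after all REOPEN/MOVE procedures finish,
      // which is the "All regions from table(s) ... moved" line in the log.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_1805923026");
    }
  }
}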
2023-07-24 20:10:38,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:38,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:38,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:38,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:38,511 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:38,518 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 20:10:38,543 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229438543"}]},"ts":"1690229438543"} 2023-07-24 20:10:38,545 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 20:10:38,547 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 20:10:38,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, UNASSIGN}] 2023-07-24 20:10:38,551 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, UNASSIGN 2023-07-24 20:10:38,551 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, UNASSIGN 2023-07-24 20:10:38,552 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, UNASSIGN 2023-07-24 20:10:38,552 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, UNASSIGN 2023-07-24 20:10:38,553 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,553 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, UNASSIGN 2023-07-24 20:10:38,553 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,553 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:38,553 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229438553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229438553"}]},"ts":"1690229438553"} 2023-07-24 20:10:38,553 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229438553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229438553"}]},"ts":"1690229438553"} 2023-07-24 20:10:38,553 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229438553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229438553"}]},"ts":"1690229438553"} 2023-07-24 20:10:38,553 INFO 
[PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:38,553 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229438553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229438553"}]},"ts":"1690229438553"} 2023-07-24 20:10:38,555 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:38,555 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438555"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229438555"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229438555"}]},"ts":"1690229438555"} 2023-07-24 20:10:38,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:38,557 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=41, state=RUNNABLE; CloseRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:38,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:38,559 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:38,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:38,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 20:10:38,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa4b78994a4b4203048134b0354cde65, disabling compactions & flushes 2023-07-24 20:10:38,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:38,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:38,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. after waiting 0 ms 2023-07-24 20:10:38,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 2023-07-24 20:10:38,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ff1e4e1887bab7297600acc8dbc02745, disabling compactions & flushes 2023-07-24 20:10:38,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. after waiting 0 ms 2023-07-24 20:10:38,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:38,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745. 2023-07-24 20:10:38,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ff1e4e1887bab7297600acc8dbc02745: 2023-07-24 20:10:38,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:38,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65. 
2023-07-24 20:10:38,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa4b78994a4b4203048134b0354cde65: 2023-07-24 20:10:38,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb2aaef89817825c9985e97d9e2d9155, disabling compactions & flushes 2023-07-24 20:10:38,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. after waiting 0 ms 2023-07-24 20:10:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:38,725 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=ff1e4e1887bab7297600acc8dbc02745, regionState=CLOSED 2023-07-24 20:10:38,726 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438725"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229438725"}]},"ts":"1690229438725"} 2023-07-24 20:10:38,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f7c014b739edee6fdbd3b8fa162c851, disabling compactions & flushes 2023-07-24 20:10:38,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. after waiting 0 ms 2023-07-24 20:10:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 
2023-07-24 20:10:38,729 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=aa4b78994a4b4203048134b0354cde65, regionState=CLOSED 2023-07-24 20:10:38,730 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438729"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229438729"}]},"ts":"1690229438729"} 2023-07-24 20:10:38,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:38,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-24 20:10:38,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; CloseRegionProcedure ff1e4e1887bab7297600acc8dbc02745, server=jenkins-hbase4.apache.org,36167,1690229431901 in 172 msec 2023-07-24 20:10:38,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155. 2023-07-24 20:10:38,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb2aaef89817825c9985e97d9e2d9155: 2023-07-24 20:10:38,741 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-24 20:10:38,741 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure aa4b78994a4b4203048134b0354cde65, server=jenkins-hbase4.apache.org,36633,1690229435886 in 181 msec 2023-07-24 20:10:38,742 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff1e4e1887bab7297600acc8dbc02745, UNASSIGN in 190 msec 2023-07-24 20:10:38,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:38,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa4b78994a4b4203048134b0354cde65, UNASSIGN in 192 msec 2023-07-24 20:10:38,743 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=cb2aaef89817825c9985e97d9e2d9155, regionState=CLOSED 2023-07-24 20:10:38,744 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229438743"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229438743"}]},"ts":"1690229438743"} 2023-07-24 
20:10:38,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851. 2023-07-24 20:10:38,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f7c014b739edee6fdbd3b8fa162c851: 2023-07-24 20:10:38,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1aa93beecac72b56d3d112b7c2da42b4, disabling compactions & flushes 2023-07-24 20:10:38,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. after waiting 0 ms 2023-07-24 20:10:38,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,748 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=7f7c014b739edee6fdbd3b8fa162c851, regionState=CLOSED 2023-07-24 20:10:38,748 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438748"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229438748"}]},"ts":"1690229438748"} 2023-07-24 20:10:38,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-24 20:10:38,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure cb2aaef89817825c9985e97d9e2d9155, server=jenkins-hbase4.apache.org,36167,1690229431901 in 187 msec 2023-07-24 20:10:38,752 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2aaef89817825c9985e97d9e2d9155, UNASSIGN in 201 msec 2023-07-24 20:10:38,754 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-24 20:10:38,754 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 7f7c014b739edee6fdbd3b8fa162c851, server=jenkins-hbase4.apache.org,36633,1690229435886 in 193 msec 2023-07-24 20:10:38,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:38,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4. 2023-07-24 20:10:38,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1aa93beecac72b56d3d112b7c2da42b4: 2023-07-24 20:10:38,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7f7c014b739edee6fdbd3b8fa162c851, UNASSIGN in 205 msec 2023-07-24 20:10:38,768 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1aa93beecac72b56d3d112b7c2da42b4, regionState=CLOSED 2023-07-24 20:10:38,768 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229438768"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229438768"}]},"ts":"1690229438768"} 2023-07-24 20:10:38,773 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-24 20:10:38,773 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 1aa93beecac72b56d3d112b7c2da42b4, server=jenkins-hbase4.apache.org,36633,1690229435886 in 209 msec 2023-07-24 20:10:38,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=38 2023-07-24 20:10:38,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1aa93beecac72b56d3d112b7c2da42b4, UNASSIGN in 224 msec 2023-07-24 20:10:38,776 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229438776"}]},"ts":"1690229438776"} 2023-07-24 20:10:38,778 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 20:10:38,780 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 20:10:38,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 257 msec 2023-07-24 20:10:38,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 20:10:38,843 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-24 20:10:38,845 INFO [Listener at localhost/39801] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,850 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:38,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-24 20:10:38,862 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-24 20:10:38,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 20:10:38,877 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,877 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,877 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,877 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,877 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,883 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits] 2023-07-24 20:10:38,883 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits] 2023-07-24 20:10:38,886 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits] 2023-07-24 20:10:38,887 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits] 2023-07-24 20:10:38,887 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits] 2023-07-24 20:10:38,902 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65/recovered.edits/7.seqid 2023-07-24 20:10:38,903 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155/recovered.edits/7.seqid 2023-07-24 20:10:38,904 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa4b78994a4b4203048134b0354cde65 2023-07-24 20:10:38,905 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745/recovered.edits/7.seqid 2023-07-24 20:10:38,905 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4/recovered.edits/7.seqid 2023-07-24 20:10:38,906 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2aaef89817825c9985e97d9e2d9155 2023-07-24 20:10:38,906 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff1e4e1887bab7297600acc8dbc02745 2023-07-24 20:10:38,906 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1aa93beecac72b56d3d112b7c2da42b4 2023-07-24 20:10:38,908 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851/recovered.edits/7.seqid 2023-07-24 20:10:38,909 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7f7c014b739edee6fdbd3b8fa162c851 2023-07-24 20:10:38,909 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 20:10:38,941 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 20:10:38,946 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229438947"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229438947"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229438947"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229438947"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229438947"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,952 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 20:10:38,952 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7f7c014b739edee6fdbd3b8fa162c851, 
NAME => 'Group_testTableMoveTruncateAndDrop,,1690229436226.7f7c014b739edee6fdbd3b8fa162c851.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => aa4b78994a4b4203048134b0354cde65, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229436226.aa4b78994a4b4203048134b0354cde65.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ff1e4e1887bab7297600acc8dbc02745, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229436226.ff1e4e1887bab7297600acc8dbc02745.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cb2aaef89817825c9985e97d9e2d9155, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229436226.cb2aaef89817825c9985e97d9e2d9155.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 1aa93beecac72b56d3d112b7c2da42b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229436226.1aa93beecac72b56d3d112b7c2da42b4.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 20:10:38,953 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-24 20:10:38,953 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229438953"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:38,956 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 20:10:38,964 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:38,964 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:38,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 20:10:38,964 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:38,964 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:38,964 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:38,966 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc empty. 2023-07-24 20:10:38,966 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb empty. 
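By this point the DisableTableProcedure (pid=38) has unassigned all five regions and the TruncateTableProcedure (pid=49, preserveSplits=true) is archiving the old region directories and preparing new ones under .tmp. A minimal sketch of the Admin calls that trigger this sequence, assuming the standard HBase 2.x client API; preserveSplits=true mirrors the value recorded for pid=49, and error handling is omitted:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAndTruncate {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Unassigns every region of the table (the UNASSIGN/CloseRegionProcedure chain above).
      admin.disableTable(table);
      // Archives the old regions and re-creates the table; passing true keeps the
      // existing split points, matching "preserveSplits=true" in the stored procedure.
      admin.truncateTable(table, true);
    }
  }
}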
2023-07-24 20:10:38,966 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 empty. 2023-07-24 20:10:38,966 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 empty. 2023-07-24 20:10:38,967 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 empty. 2023-07-24 20:10:38,967 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:38,967 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:38,967 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:38,968 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:38,968 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:38,968 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 20:10:39,004 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:39,006 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e521488648899ab16640bc437c7f4bdb, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:39,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2c148c579b33b0f1a5ffbbaead601e82, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:39,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => cec4cc6e0d97c4a8a043d6b37f3b5fd8, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2c148c579b33b0f1a5ffbbaead601e82, disabling compactions & flushes 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e521488648899ab16640bc437c7f4bdb, disabling compactions & flushes 2023-07-24 20:10:39,055 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,055 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 
2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. after waiting 0 ms 2023-07-24 20:10:39,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. after waiting 0 ms 2023-07-24 20:10:39,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:39,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:39,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e521488648899ab16640bc437c7f4bdb: 2023-07-24 20:10:39,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2c148c579b33b0f1a5ffbbaead601e82: 2023-07-24 20:10:39,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9b09f16e4ecdc5e924eeacf3f3020f66, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:39,057 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9ccc50987f492472e6be13902f7093bc, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:39,094 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated 
Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,094 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9ccc50987f492472e6be13902f7093bc, disabling compactions & flushes 2023-07-24 20:10:39,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. after waiting 0 ms 2023-07-24 20:10:39,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9ccc50987f492472e6be13902f7093bc: 2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 9b09f16e4ecdc5e924eeacf3f3020f66, disabling compactions & flushes 2023-07-24 20:10:39,097 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. after waiting 0 ms 2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:39,097 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 
2023-07-24 20:10:39,097 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9b09f16e4ecdc5e924eeacf3f3020f66: 2023-07-24 20:10:39,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 20:10:39,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing cec4cc6e0d97c4a8a043d6b37f3b5fd8, disabling compactions & flushes 2023-07-24 20:10:39,477 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. after waiting 0 ms 2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:39,477 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 
2023-07-24 20:10:39,477 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for cec4cc6e0d97c4a8a043d6b37f3b5fd8: 2023-07-24 20:10:39,484 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439484"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229439484"}]},"ts":"1690229439484"} 2023-07-24 20:10:39,485 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439484"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229439484"}]},"ts":"1690229439484"} 2023-07-24 20:10:39,485 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439484"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229439484"}]},"ts":"1690229439484"} 2023-07-24 20:10:39,485 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439484"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229439484"}]},"ts":"1690229439484"} 2023-07-24 20:10:39,485 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439484"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229439484"}]},"ts":"1690229439484"} 2023-07-24 20:10:39,488 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 20:10:39,489 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229439489"}]},"ts":"1690229439489"} 2023-07-24 20:10:39,491 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 20:10:39,497 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:39,497 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:39,497 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:39,497 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:39,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, ASSIGN}] 2023-07-24 20:10:39,503 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, ASSIGN 2023-07-24 20:10:39,503 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, ASSIGN 2023-07-24 20:10:39,503 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, ASSIGN 2023-07-24 20:10:39,504 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, ASSIGN 2023-07-24 20:10:39,504 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, ASSIGN 2023-07-24 20:10:39,504 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:39,507 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:39,507 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:39,507 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:39,507 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:39,655 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 20:10:39,662 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9ccc50987f492472e6be13902f7093bc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,662 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439662"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439662"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439662"}]},"ts":"1690229439662"} 2023-07-24 20:10:39,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e521488648899ab16640bc437c7f4bdb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,663 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=cec4cc6e0d97c4a8a043d6b37f3b5fd8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,663 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439663"}]},"ts":"1690229439663"} 2023-07-24 20:10:39,663 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439663"}]},"ts":"1690229439663"} 2023-07-24 20:10:39,663 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=2c148c579b33b0f1a5ffbbaead601e82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,664 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439663"}]},"ts":"1690229439663"} 2023-07-24 20:10:39,663 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=9b09f16e4ecdc5e924eeacf3f3020f66, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,664 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439663"}]},"ts":"1690229439663"} 2023-07-24 20:10:39,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=54, state=RUNNABLE; OpenRegionProcedure 
9ccc50987f492472e6be13902f7093bc, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:39,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=52, state=RUNNABLE; OpenRegionProcedure e521488648899ab16640bc437c7f4bdb, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:39,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; OpenRegionProcedure cec4cc6e0d97c4a8a043d6b37f3b5fd8, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:39,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=51, state=RUNNABLE; OpenRegionProcedure 2c148c579b33b0f1a5ffbbaead601e82, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:39,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure 9b09f16e4ecdc5e924eeacf3f3020f66, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:39,831 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:39,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cec4cc6e0d97c4a8a043d6b37f3b5fd8, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 20:10:39,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,841 INFO [StoreOpener-cec4cc6e0d97c4a8a043d6b37f3b5fd8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,845 DEBUG [StoreOpener-cec4cc6e0d97c4a8a043d6b37f3b5fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/f 2023-07-24 20:10:39,845 DEBUG [StoreOpener-cec4cc6e0d97c4a8a043d6b37f3b5fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/f 2023-07-24 20:10:39,845 INFO [StoreOpener-cec4cc6e0d97c4a8a043d6b37f3b5fd8-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cec4cc6e0d97c4a8a043d6b37f3b5fd8 columnFamilyName f 2023-07-24 20:10:39,847 INFO [StoreOpener-cec4cc6e0d97c4a8a043d6b37f3b5fd8-1] regionserver.HStore(310): Store=cec4cc6e0d97c4a8a043d6b37f3b5fd8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:39,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:39,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:39,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cec4cc6e0d97c4a8a043d6b37f3b5fd8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9795831840, jitterRate=-0.08769206702709198}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:39,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cec4cc6e0d97c4a8a043d6b37f3b5fd8: 2023-07-24 20:10:39,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8., pid=57, masterSystemTime=1690229439825 2023-07-24 20:10:39,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:39,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 
2023-07-24 20:10:39,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ccc50987f492472e6be13902f7093bc, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 20:10:39,866 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=cec4cc6e0d97c4a8a043d6b37f3b5fd8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,866 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229439865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229439865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229439865"}]},"ts":"1690229439865"} 2023-07-24 20:10:39,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,868 INFO [StoreOpener-9ccc50987f492472e6be13902f7093bc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,870 DEBUG [StoreOpener-9ccc50987f492472e6be13902f7093bc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/f 2023-07-24 20:10:39,870 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-24 20:10:39,870 DEBUG [StoreOpener-9ccc50987f492472e6be13902f7093bc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/f 2023-07-24 20:10:39,871 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; OpenRegionProcedure cec4cc6e0d97c4a8a043d6b37f3b5fd8, server=jenkins-hbase4.apache.org,36633,1690229435886 in 193 msec 2023-07-24 20:10:39,871 INFO 
[StoreOpener-9ccc50987f492472e6be13902f7093bc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ccc50987f492472e6be13902f7093bc columnFamilyName f 2023-07-24 20:10:39,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, ASSIGN in 373 msec 2023-07-24 20:10:39,873 INFO [StoreOpener-9ccc50987f492472e6be13902f7093bc-1] regionserver.HStore(310): Store=9ccc50987f492472e6be13902f7093bc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:39,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 
2023-07-24 20:10:39,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9b09f16e4ecdc5e924eeacf3f3020f66, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 20:10:39,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,877 INFO [StoreOpener-9b09f16e4ecdc5e924eeacf3f3020f66-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,882 DEBUG [StoreOpener-9b09f16e4ecdc5e924eeacf3f3020f66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/f 2023-07-24 20:10:39,882 DEBUG [StoreOpener-9b09f16e4ecdc5e924eeacf3f3020f66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/f 2023-07-24 20:10:39,882 INFO [StoreOpener-9b09f16e4ecdc5e924eeacf3f3020f66-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9b09f16e4ecdc5e924eeacf3f3020f66 columnFamilyName f 2023-07-24 20:10:39,883 INFO [StoreOpener-9b09f16e4ecdc5e924eeacf3f3020f66-1] regionserver.HStore(310): Store=9b09f16e4ecdc5e924eeacf3f3020f66/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-24 20:10:39,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:39,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:39,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:39,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:39,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ccc50987f492472e6be13902f7093bc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9439432000, jitterRate=-0.12088438868522644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:39,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ccc50987f492472e6be13902f7093bc: 2023-07-24 20:10:39,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9b09f16e4ecdc5e924eeacf3f3020f66; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11653446720, jitterRate=0.08531180024147034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:39,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9b09f16e4ecdc5e924eeacf3f3020f66: 2023-07-24 20:10:39,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc., pid=55, masterSystemTime=1690229439825 2023-07-24 20:10:39,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66., pid=59, masterSystemTime=1690229439869 2023-07-24 20:10:39,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 
2023-07-24 20:10:39,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:39,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:39,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e521488648899ab16640bc437c7f4bdb, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 20:10:39,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,906 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9ccc50987f492472e6be13902f7093bc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,906 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439906"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229439906"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229439906"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229439906"}]},"ts":"1690229439906"} 2023-07-24 20:10:39,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:39,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:39,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 
2023-07-24 20:10:39,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2c148c579b33b0f1a5ffbbaead601e82, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 20:10:39,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:39,909 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=9b09f16e4ecdc5e924eeacf3f3020f66, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,909 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439908"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229439908"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229439908"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229439908"}]},"ts":"1690229439908"} 2023-07-24 20:10:39,917 INFO [StoreOpener-2c148c579b33b0f1a5ffbbaead601e82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,919 DEBUG [StoreOpener-2c148c579b33b0f1a5ffbbaead601e82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/f 2023-07-24 20:10:39,920 DEBUG [StoreOpener-2c148c579b33b0f1a5ffbbaead601e82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/f 2023-07-24 20:10:39,920 INFO [StoreOpener-2c148c579b33b0f1a5ffbbaead601e82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2c148c579b33b0f1a5ffbbaead601e82 columnFamilyName f 2023-07-24 20:10:39,925 INFO [StoreOpener-e521488648899ab16640bc437c7f4bdb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,926 INFO [StoreOpener-2c148c579b33b0f1a5ffbbaead601e82-1] regionserver.HStore(310): Store=2c148c579b33b0f1a5ffbbaead601e82/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:39,926 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=54 2023-07-24 20:10:39,927 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=54, state=SUCCESS; OpenRegionProcedure 9ccc50987f492472e6be13902f7093bc, server=jenkins-hbase4.apache.org,36633,1690229435886 in 245 msec 2023-07-24 20:10:39,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,928 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-24 20:10:39,928 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure 9b09f16e4ecdc5e924eeacf3f3020f66, server=jenkins-hbase4.apache.org,36167,1690229431901 in 214 msec 2023-07-24 20:10:39,928 DEBUG [StoreOpener-e521488648899ab16640bc437c7f4bdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/f 2023-07-24 20:10:39,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,928 DEBUG [StoreOpener-e521488648899ab16640bc437c7f4bdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/f 2023-07-24 20:10:39,929 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, ASSIGN in 426 msec 2023-07-24 20:10:39,929 INFO [StoreOpener-e521488648899ab16640bc437c7f4bdb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e521488648899ab16640bc437c7f4bdb columnFamilyName f 2023-07-24 20:10:39,930 INFO [StoreOpener-e521488648899ab16640bc437c7f4bdb-1] regionserver.HStore(310): Store=e521488648899ab16640bc437c7f4bdb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:39,930 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, ASSIGN in 427 msec 2023-07-24 20:10:39,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:39,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:39,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:39,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2c148c579b33b0f1a5ffbbaead601e82; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10187873920, jitterRate=-0.05118030309677124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:39,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2c148c579b33b0f1a5ffbbaead601e82: 2023-07-24 20:10:39,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82., pid=58, masterSystemTime=1690229439869 2023-07-24 20:10:39,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:39,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e521488648899ab16640bc437c7f4bdb; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10164863360, jitterRate=-0.053323328495025635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:39,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e521488648899ab16640bc437c7f4bdb: 2023-07-24 20:10:39,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb., pid=56, masterSystemTime=1690229439825 2023-07-24 20:10:39,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,942 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:39,944 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=2c148c579b33b0f1a5ffbbaead601e82, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,944 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439944"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229439944"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229439944"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229439944"}]},"ts":"1690229439944"} 2023-07-24 20:10:39,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:39,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 
2023-07-24 20:10:39,946 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e521488648899ab16640bc437c7f4bdb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,946 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439945"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229439945"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229439945"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229439945"}]},"ts":"1690229439945"} 2023-07-24 20:10:39,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=51 2023-07-24 20:10:39,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=51, state=SUCCESS; OpenRegionProcedure 2c148c579b33b0f1a5ffbbaead601e82, server=jenkins-hbase4.apache.org,36167,1690229431901 in 260 msec 2023-07-24 20:10:39,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-24 20:10:39,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; OpenRegionProcedure e521488648899ab16640bc437c7f4bdb, server=jenkins-hbase4.apache.org,36633,1690229435886 in 281 msec 2023-07-24 20:10:39,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, ASSIGN in 453 msec 2023-07-24 20:10:39,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=49 2023-07-24 20:10:39,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, ASSIGN in 451 msec 2023-07-24 20:10:39,954 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229439953"}]},"ts":"1690229439953"} 2023-07-24 20:10:39,955 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 20:10:39,957 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-24 20:10:39,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1050 sec 2023-07-24 20:10:39,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 20:10:39,971 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-24 20:10:39,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 
20:10:39,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:39,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:39,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:39,975 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:39,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:39,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:39,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 20:10:39,986 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229439986"}]},"ts":"1690229439986"} 2023-07-24 20:10:39,988 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 20:10:39,990 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 20:10:39,993 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, UNASSIGN}] 2023-07-24 20:10:39,995 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, UNASSIGN 2023-07-24 20:10:39,996 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, UNASSIGN 2023-07-24 20:10:39,996 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, UNASSIGN 2023-07-24 20:10:39,997 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, UNASSIGN 2023-07-24 20:10:39,997 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=2c148c579b33b0f1a5ffbbaead601e82, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,997 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9ccc50987f492472e6be13902f7093bc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,998 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439997"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439997"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439997"}]},"ts":"1690229439997"} 2023-07-24 20:10:39,998 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439997"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439997"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439997"}]},"ts":"1690229439997"} 2023-07-24 20:10:39,998 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, UNASSIGN 2023-07-24 20:10:39,998 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=9b09f16e4ecdc5e924eeacf3f3020f66, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:39,999 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229439998"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439998"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439998"}]},"ts":"1690229439998"} 2023-07-24 20:10:39,999 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=cec4cc6e0d97c4a8a043d6b37f3b5fd8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:39,999 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229439999"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229439999"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229439999"}]},"ts":"1690229439999"} 2023-07-24 20:10:40,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=62, state=RUNNABLE; CloseRegionProcedure 2c148c579b33b0f1a5ffbbaead601e82, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:40,002 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e521488648899ab16640bc437c7f4bdb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:40,002 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229440001"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229440001"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229440001"}]},"ts":"1690229440001"} 2023-07-24 20:10:40,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=65, state=RUNNABLE; CloseRegionProcedure 9ccc50987f492472e6be13902f7093bc, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:40,004 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=64, state=RUNNABLE; CloseRegionProcedure 9b09f16e4ecdc5e924eeacf3f3020f66, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:40,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=61, state=RUNNABLE; CloseRegionProcedure cec4cc6e0d97c4a8a043d6b37f3b5fd8, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:40,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure e521488648899ab16640bc437c7f4bdb, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:40,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 20:10:40,158 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:40,158 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9b09f16e4ecdc5e924eeacf3f3020f66, disabling compactions & flushes 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cec4cc6e0d97c4a8a043d6b37f3b5fd8, disabling compactions & flushes 2023-07-24 20:10:40,159 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:40,159 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 
2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. after waiting 0 ms 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. after waiting 0 ms 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:40,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:40,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:40,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66. 2023-07-24 20:10:40,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9b09f16e4ecdc5e924eeacf3f3020f66: 2023-07-24 20:10:40,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:40,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:40,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2c148c579b33b0f1a5ffbbaead601e82, disabling compactions & flushes 2023-07-24 20:10:40,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:40,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:40,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. after waiting 0 ms 2023-07-24 20:10:40,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 
2023-07-24 20:10:40,183 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=9b09f16e4ecdc5e924eeacf3f3020f66, regionState=CLOSED 2023-07-24 20:10:40,183 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229440183"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229440183"}]},"ts":"1690229440183"} 2023-07-24 20:10:40,192 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=64 2023-07-24 20:10:40,192 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=64, state=SUCCESS; CloseRegionProcedure 9b09f16e4ecdc5e924eeacf3f3020f66, server=jenkins-hbase4.apache.org,36167,1690229431901 in 185 msec 2023-07-24 20:10:40,194 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9b09f16e4ecdc5e924eeacf3f3020f66, UNASSIGN in 200 msec 2023-07-24 20:10:40,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:40,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:40,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8. 2023-07-24 20:10:40,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cec4cc6e0d97c4a8a043d6b37f3b5fd8: 2023-07-24 20:10:40,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82. 2023-07-24 20:10:40,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2c148c579b33b0f1a5ffbbaead601e82: 2023-07-24 20:10:40,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:40,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:40,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ccc50987f492472e6be13902f7093bc, disabling compactions & flushes 2023-07-24 20:10:40,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:40,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 
2023-07-24 20:10:40,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. after waiting 0 ms 2023-07-24 20:10:40,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 2023-07-24 20:10:40,205 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=cec4cc6e0d97c4a8a043d6b37f3b5fd8, regionState=CLOSED 2023-07-24 20:10:40,206 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229440205"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229440205"}]},"ts":"1690229440205"} 2023-07-24 20:10:40,207 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=2c148c579b33b0f1a5ffbbaead601e82, regionState=CLOSED 2023-07-24 20:10:40,208 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229440207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229440207"}]},"ts":"1690229440207"} 2023-07-24 20:10:40,208 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:40,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=61 2023-07-24 20:10:40,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=61, state=SUCCESS; CloseRegionProcedure cec4cc6e0d97c4a8a043d6b37f3b5fd8, server=jenkins-hbase4.apache.org,36633,1690229435886 in 204 msec 2023-07-24 20:10:40,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=62 2023-07-24 20:10:40,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=62, state=SUCCESS; CloseRegionProcedure 2c148c579b33b0f1a5ffbbaead601e82, server=jenkins-hbase4.apache.org,36167,1690229431901 in 212 msec 2023-07-24 20:10:40,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:40,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cec4cc6e0d97c4a8a043d6b37f3b5fd8, UNASSIGN in 225 msec 2023-07-24 20:10:40,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc. 
2023-07-24 20:10:40,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ccc50987f492472e6be13902f7093bc: 2023-07-24 20:10:40,221 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2c148c579b33b0f1a5ffbbaead601e82, UNASSIGN in 226 msec 2023-07-24 20:10:40,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:40,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:40,223 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9ccc50987f492472e6be13902f7093bc, regionState=CLOSED 2023-07-24 20:10:40,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e521488648899ab16640bc437c7f4bdb, disabling compactions & flushes 2023-07-24 20:10:40,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:40,224 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690229440223"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229440223"}]},"ts":"1690229440223"} 2023-07-24 20:10:40,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:40,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. after waiting 0 ms 2023-07-24 20:10:40,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 2023-07-24 20:10:40,232 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=65 2023-07-24 20:10:40,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=65, state=SUCCESS; CloseRegionProcedure 9ccc50987f492472e6be13902f7093bc, server=jenkins-hbase4.apache.org,36633,1690229435886 in 224 msec 2023-07-24 20:10:40,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:40,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb. 
2023-07-24 20:10:40,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e521488648899ab16640bc437c7f4bdb: 2023-07-24 20:10:40,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ccc50987f492472e6be13902f7093bc, UNASSIGN in 241 msec 2023-07-24 20:10:40,237 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:40,238 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e521488648899ab16640bc437c7f4bdb, regionState=CLOSED 2023-07-24 20:10:40,238 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690229440238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229440238"}]},"ts":"1690229440238"} 2023-07-24 20:10:40,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-24 20:10:40,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure e521488648899ab16640bc437c7f4bdb, server=jenkins-hbase4.apache.org,36633,1690229435886 in 231 msec 2023-07-24 20:10:40,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=60 2023-07-24 20:10:40,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e521488648899ab16640bc437c7f4bdb, UNASSIGN in 250 msec 2023-07-24 20:10:40,246 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229440246"}]},"ts":"1690229440246"} 2023-07-24 20:10:40,248 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 20:10:40,251 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 20:10:40,251 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 20:10:40,261 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 282 msec 2023-07-24 20:10:40,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 20:10:40,291 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-24 20:10:40,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,311 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1805923026' 2023-07-24 20:10:40,314 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:40,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:40,331 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:40,331 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:40,332 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:40,332 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:40,332 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:40,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 20:10:40,336 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/recovered.edits] 2023-07-24 20:10:40,337 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/recovered.edits] 2023-07-24 20:10:40,337 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/recovered.edits] 2023-07-24 20:10:40,338 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/recovered.edits] 2023-07-24 20:10:40,338 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/recovered.edits] 2023-07-24 20:10:40,356 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 20:10:40,356 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc/recovered.edits/4.seqid 2023-07-24 20:10:40,357 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82/recovered.edits/4.seqid 2023-07-24 20:10:40,357 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66/recovered.edits/4.seqid 2023-07-24 20:10:40,358 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for 
table 'hbase:rsgroup' 2023-07-24 20:10:40,358 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8/recovered.edits/4.seqid 2023-07-24 20:10:40,359 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 20:10:40,359 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ccc50987f492472e6be13902f7093bc 2023-07-24 20:10:40,360 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2c148c579b33b0f1a5ffbbaead601e82 2023-07-24 20:10:40,360 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9b09f16e4ecdc5e924eeacf3f3020f66 2023-07-24 20:10:40,360 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cec4cc6e0d97c4a8a043d6b37f3b5fd8 2023-07-24 20:10:40,360 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb/recovered.edits/4.seqid 2023-07-24 20:10:40,361 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 20:10:40,361 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 20:10:40,361 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e521488648899ab16640bc437c7f4bdb 2023-07-24 20:10:40,361 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 20:10:40,362 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:10:40,362 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 20:10:40,362 DEBUG [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 20:10:40,362 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 20:10:40,368 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,374 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 20:10:40,377 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229440380"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229440380"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229440380"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229440380"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,380 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229440380"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,383 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 20:10:40,383 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cec4cc6e0d97c4a8a043d6b37f3b5fd8, NAME => 'Group_testTableMoveTruncateAndDrop,,1690229438911.cec4cc6e0d97c4a8a043d6b37f3b5fd8.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2c148c579b33b0f1a5ffbbaead601e82, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690229438911.2c148c579b33b0f1a5ffbbaead601e82.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e521488648899ab16640bc437c7f4bdb, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690229438911.e521488648899ab16640bc437c7f4bdb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 9b09f16e4ecdc5e924eeacf3f3020f66, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690229438911.9b09f16e4ecdc5e924eeacf3f3020f66.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9ccc50987f492472e6be13902f7093bc, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690229438911.9ccc50987f492472e6be13902f7093bc.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 20:10:40,383 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-24 20:10:40,384 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229440383"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:40,386 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 20:10:40,388 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 20:10:40,390 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 87 msec 2023-07-24 20:10:40,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 20:10:40,438 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-24 20:10:40,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:40,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:40,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
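The disable/drop sequence traced above (pid=60 DisableTableProcedure with its TransitRegionStateProcedure children, then pid=71 DeleteTableProcedure) is driven from the client by two Admin calls. A minimal sketch of that client side, assuming a reachable cluster and an hbase-site.xml on the classpath; the class and variable names are illustrative, not the test's own code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAndDropExample {
  public static void main(String[] args) throws Exception {
    // hbase-site.xml on the classpath supplies the ZooKeeper quorum of the cluster
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // DISABLE: the master stores a DisableTableProcedure and unassigns every
      // region, as traced above for pid=60 and its UNASSIGN subprocedures.
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      // DELETE: the master stores a DeleteTableProcedure, archives the region
      // directories and removes the table's rows from hbase:meta (pid=71 above).
      admin.deleteTable(table);
    }
  }
}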
2023-07-24 20:10:40,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:40,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:40,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:40,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1805923026, current retry=0 2023-07-24 20:10:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1805923026 => default 2023-07-24 20:10:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:40,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1805923026 2023-07-24 20:10:40,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:40,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:40,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:40,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
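The teardown requests logged here move the test group's servers back to the default rsgroup and then remove the group (the MoveServers and RemoveRSGroup calls). A minimal sketch of that sequence using the module's RSGroupAdminClient (the same client class that appears in the stack trace further below), assuming a Connection to this mini-cluster; the server list and group name are copied from this trace and the surrounding setup is illustrative:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupTeardownExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      // Servers that were assigned to the test group, taken from the trace above.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36167));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36633));
      // Move the servers back to the built-in default group
      // (the RSGroupAdminService.MoveServers request logged above).
      rsGroupAdmin.moveServers(servers, "default");
      // Remove the now-empty test group
      // (the RSGroupAdminService.RemoveRSGroup request logged above).
      rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_1805923026");
    }
  }
}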
2023-07-24 20:10:40,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:40,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:40,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:40,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:40,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:40,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:40,487 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:40,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:40,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:40,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:40,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:40,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230640501, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:40,503 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:40,505 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:40,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,507 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:40,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:40,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,536 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=490 (was 420) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1413518876_17 at /127.0.0.1:60142 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp433261441-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1-prefix:jenkins-hbase4.apache.org,36633,1690229435886 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-41a50c31-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:32791 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1413518876_17 at /127.0.0.1:60274 [Receiving block BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-475340857_17 at /127.0.0.1:50244 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52127@0x02024450 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52127@0x02024450-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp433261441-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:36633-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:36633 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp433261441-635-acceptor-0@466459d7-ServerConnector@10063311{HTTP/1.1, (http/1.1)}{0.0.0.0:45271} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-6 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:32791 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52127@0x02024450-SendThread(127.0.0.1:52127) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:36633Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1413518876_17 at /127.0.0.1:58304 [Receiving block BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1413518876_17 at /127.0.0.1:50258 [Receiving block BP-336442049-172.31.14.131-1690229426124:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=761 (was 684) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=393 (was 401), ProcessCount=176 (was 176), AvailableMemoryMB=6271 (was 6655)
2023-07-24 20:10:40,553 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=490, OpenFileDescriptor=761, MaxFileDescriptor=60000, SystemLoadAverage=393, ProcessCount=176, AvailableMemoryMB=6270
2023-07-24 20:10:40,555 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testValidGroupNames
2023-07-24 20:10:40,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 20:10:40,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 20:10:40,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-24 20:10:40,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-24 20:10:40,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-24 20:10:40,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-24 20:10:40,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-24 20:10:40,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-24 20:10:40,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-24 20:10:40,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-24 20:10:40,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-24 20:10:40,577 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-24 20:10:40,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-24 20:10:40,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-24 20:10:40,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-24 20:10:40,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-24 20:10:40,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-24 20:10:40,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 20:10:40,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 20:10:40,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master
2023-07-24 20:10:40,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-24 20:10:40,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230640592, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
2023-07-24 20:10:40,593 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:40,595 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:40,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,596 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:40,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:40,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-24 20:10:40,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60832 deadline: 1690230640598, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 20:10:40,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-24 20:10:40,600 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60832 deadline: 1690230640600, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 20:10:40,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-24 20:10:40,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:60832 deadline: 1690230640601, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 20:10:40,603 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-24 20:10:40,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-24 20:10:40,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:40,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:40,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:40,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
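
Note on the group-name checks above: the three rejections (foo*, foo@ and -) together with the accepted foo_123 indicate that the check behind RSGroupInfoManagerImpl.checkGroupName allows underscores as well as letters and digits, even though the error text only says "alphanumeric". A minimal standalone sketch of such a check, assuming a [a-zA-Z0-9_]+ pattern inferred from this log rather than copied from the HBase source:

import org.apache.hadoop.hbase.constraint.ConstraintException;

public final class GroupNameCheckSketch {
  // Hypothetical re-implementation of the name check exercised by testValidGroupNames;
  // the regex is an assumption inferred from the accepted/rejected names in this log.
  static void checkGroupName(String groupName) throws ConstraintException {
    if (groupName == null || groupName.isEmpty() || !groupName.matches("[a-zA-Z0-9_]+")) {
      throw new ConstraintException("RSGroup name should only contain alphanumeric characters");
    }
  }

  public static void main(String[] args) throws Exception {
    checkGroupName("foo_123"); // accepted, as in the log
    try {
      checkGroupName("foo*"); // rejected, as in the log
    } catch (ConstraintException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
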
2023-07-24 20:10:40,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:40,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:40,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:40,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-24 20:10:40,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:40,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:40,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:40,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
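
Note on the recurring "Got this on setup, FYI" warnings in this section: the shared setup/teardown in TestRSGroupsBase re-creates a group named master, tries to move the master's own address into it, and tolerates the resulting ConstraintException because the master is not a live region server. A hedged sketch of that call pattern with the rsgroup admin client (the connection setup, the RSGroupAdminClient constructor usage and the hard-coded address are illustrative assumptions, not the test's actual code):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveMasterIntoGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");
      try {
        // 43005 is the master RPC port seen in this log; it is not a region
        // server, so moveServers fails with the ConstraintException above.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43005)),
            "master");
      } catch (IOException e) {
        // The test logs this and continues, matching the WARN entries above.
        System.out.println("Got this on setup, FYI: " + e.getMessage());
      }
    }
  }
}
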
2023-07-24 20:10:40,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:40,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:40,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:40,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:40,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:40,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:40,651 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:40,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:40,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:40,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:40,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:40,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230640668, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:40,669 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:40,670 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:40,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,672 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:40,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:40,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,691 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495 (was 490) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-5 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=761 (was 761), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=393 (was 393), ProcessCount=176 (was 176), AvailableMemoryMB=6263 (was 6270) 2023-07-24 20:10:40,710 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=495, OpenFileDescriptor=761, MaxFileDescriptor=60000, SystemLoadAverage=393, ProcessCount=176, AvailableMemoryMB=6261 2023-07-24 20:10:40,710 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-24 20:10:40,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:40,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:10:40,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:40,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:40,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:40,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:40,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:40,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:40,732 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:40,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:40,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:40,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:40,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:40,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:40,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230640749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:40,750 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:40,752 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:40,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,753 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:40,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:40,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:40,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:40,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 2023-07-24 20:10:40,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:40,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:40,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:40,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:40,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:40,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:36633] to rsgroup bar 2023-07-24 20:10:40,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:40,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:40,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:40,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:40,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(238): Moving server region 05a513489858765cac84f66c0c371109, which do not belong to RSGroup bar 2023-07-24 20:10:40,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, REOPEN/MOVE 2023-07-24 20:10:40,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 20:10:40,789 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, REOPEN/MOVE 2023-07-24 20:10:40,790 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:40,790 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229440790"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229440790"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229440790"}]},"ts":"1690229440790"} 2023-07-24 20:10:40,792 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:40,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05a513489858765cac84f66c0c371109 2023-07-24 20:10:40,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05a513489858765cac84f66c0c371109, disabling compactions & flushes 2023-07-24 20:10:40,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:40,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:40,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. after waiting 0 ms 2023-07-24 20:10:40,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:40,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 05a513489858765cac84f66c0c371109 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-24 20:10:41,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/.tmp/info/4b0d8f02a0ee477787320ffa49b68cdc 2023-07-24 20:10:41,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/.tmp/info/4b0d8f02a0ee477787320ffa49b68cdc as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info/4b0d8f02a0ee477787320ffa49b68cdc 2023-07-24 20:10:41,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info/4b0d8f02a0ee477787320ffa49b68cdc, entries=2, sequenceid=6, filesize=4.8 K 2023-07-24 20:10:41,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 05a513489858765cac84f66c0c371109 in 197ms, sequenceid=6, compaction requested=false 2023-07-24 20:10:41,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-24 20:10:41,161 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 
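The records above cover the first half of the setup for this test: the client adds an rsgroup named bar (AddRSGroup), then moves three of the four region servers into it (MoveServers), and the master responds by moving the hbase:namespace region, which does not belong to bar, off of those servers (pid=72, REOPEN/MOVE; closed on 43675 above). A minimal client-side sketch of those two calls, assuming the RSGroupAdminClient API from the hbase-rsgroup module and an already-open Connection; the wrapper the test itself uses may differ:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch of the client calls behind the AddRSGroup/MoveServers requests above.
    // Host/port values are copied from the log; 'connection' is an open HBase Connection.
    static void addBarAndMoveServers(Connection connection) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(connection);
        rsGroupAdmin.addRSGroup("bar");                              // -> RSGroupAdminService.AddRSGroup
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36167));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43675));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36633));
        // The master first moves regions that do not belong to 'bar'
        // (here hbase:namespace) off of these servers, then completes the move.
        rsGroupAdmin.moveServers(servers, "bar");                    // -> RSGroupAdminService.MoveServers
    }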
2023-07-24 20:10:41,161 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05a513489858765cac84f66c0c371109: 2023-07-24 20:10:41,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 05a513489858765cac84f66c0c371109 move to jenkins-hbase4.apache.org,45007,1690229432026 record at close sequenceid=6 2023-07-24 20:10:41,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,171 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=CLOSED 2023-07-24 20:10:41,172 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229441171"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229441171"}]},"ts":"1690229441171"} 2023-07-24 20:10:41,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-24 20:10:41,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,43675,1690229432113 in 382 msec 2023-07-24 20:10:41,177 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:41,328 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:41,328 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229441328"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229441328"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229441328"}]},"ts":"1690229441328"} 2023-07-24 20:10:41,330 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; OpenRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:41,491 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 
2023-07-24 20:10:41,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05a513489858765cac84f66c0c371109, NAME => 'hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:41,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:41,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,493 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,495 DEBUG [StoreOpener-05a513489858765cac84f66c0c371109-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info 2023-07-24 20:10:41,495 DEBUG [StoreOpener-05a513489858765cac84f66c0c371109-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info 2023-07-24 20:10:41,495 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05a513489858765cac84f66c0c371109 columnFamilyName info 2023-07-24 20:10:41,507 DEBUG [StoreOpener-05a513489858765cac84f66c0c371109-1] regionserver.HStore(539): loaded hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/info/4b0d8f02a0ee477787320ffa49b68cdc 2023-07-24 20:10:41,507 INFO [StoreOpener-05a513489858765cac84f66c0c371109-1] regionserver.HStore(310): Store=05a513489858765cac84f66c0c371109/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:41,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:41,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05a513489858765cac84f66c0c371109; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11939184800, jitterRate=0.1119232326745987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:41,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05a513489858765cac84f66c0c371109: 2023-07-24 20:10:41,522 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109., pid=74, masterSystemTime=1690229441482 2023-07-24 20:10:41,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:41,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 
2023-07-24 20:10:41,525 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=05a513489858765cac84f66c0c371109, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:41,525 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229441525"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229441525"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229441525"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229441525"}]},"ts":"1690229441525"} 2023-07-24 20:10:41,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-24 20:10:41,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; OpenRegionProcedure 05a513489858765cac84f66c0c371109, server=jenkins-hbase4.apache.org,45007,1690229432026 in 197 msec 2023-07-24 20:10:41,531 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=05a513489858765cac84f66c0c371109, REOPEN/MOVE in 742 msec 2023-07-24 20:10:41,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-24 20:10:41,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886, jenkins-hbase4.apache.org,43675,1690229432113] are moved back to default 2023-07-24 20:10:41,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-24 20:10:41,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:41,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:41,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:41,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 20:10:41,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:41,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-24 20:10:41,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:41,804 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:41,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 75 2023-07-24 20:10:41,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 20:10:41,807 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:41,808 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:41,808 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:41,809 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:41,814 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:41,816 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:41,817 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 empty. 
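The CreateTableProcedure above (pid=75) is the server-side half of an ordinary createTable call for Group_testFailRemoveGroup with a single column family f; every attribute in the logged descriptor is a default value. A sketch of the client call using the standard HBase 2.x Admin API (the test may build its descriptor through a helper instead):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Create the table seen in the log: one region (no split keys), one family 'f',
    // all other table/family attributes left at their defaults.
    static void createGroupTestTable(Connection connection) throws IOException {
        try (Admin admin = connection.getAdmin()) {
            admin.createTable(
                TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                    .build());
        }
    }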
2023-07-24 20:10:41,817 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:41,822 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 20:10:41,864 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:41,866 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 62c86f0f4ddc8212a5cf0ab60e03c7e4, NAME => 'Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:41,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 20:10:42,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 62c86f0f4ddc8212a5cf0ab60e03c7e4, disabling compactions & flushes 2023-07-24 20:10:42,294 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. after waiting 0 ms 2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:42,294 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:42,294 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:42,298 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:42,299 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229442299"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229442299"}]},"ts":"1690229442299"} 2023-07-24 20:10:42,302 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:10:42,303 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:42,304 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229442303"}]},"ts":"1690229442303"} 2023-07-24 20:10:42,305 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-24 20:10:42,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, ASSIGN}] 2023-07-24 20:10:42,311 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, ASSIGN 2023-07-24 20:10:42,312 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:42,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 20:10:42,464 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:42,464 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229442464"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229442464"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229442464"}]},"ts":"1690229442464"} 2023-07-24 20:10:42,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 
20:10:42,626 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:42,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 62c86f0f4ddc8212a5cf0ab60e03c7e4, NAME => 'Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:42,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:42,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,629 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,631 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:42,631 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:42,631 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 62c86f0f4ddc8212a5cf0ab60e03c7e4 columnFamilyName f 2023-07-24 20:10:42,632 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(310): Store=62c86f0f4ddc8212a5cf0ab60e03c7e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:42,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:42,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 62c86f0f4ddc8212a5cf0ab60e03c7e4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10666143200, jitterRate=-0.006638005375862122}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:42,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:42,642 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4., pid=77, masterSystemTime=1690229442621 2023-07-24 20:10:42,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:42,644 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:42,644 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:42,645 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229442644"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229442644"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229442644"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229442644"}]},"ts":"1690229442644"} 2023-07-24 20:10:42,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-24 20:10:42,649 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 180 msec 2023-07-24 20:10:42,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-24 20:10:42,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, ASSIGN in 340 msec 2023-07-24 20:10:42,651 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:42,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229442652"}]},"ts":"1690229442652"} 2023-07-24 20:10:42,659 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-24 20:10:42,661 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:42,663 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 861 msec 2023-07-24 20:10:42,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 20:10:42,912 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 75 completed 2023-07-24 20:10:42,912 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-24 20:10:42,912 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:42,918 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
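Once procId 75 completes, the listener thread blocks until every region of the new table is assigned; the "Waiting until all regions ... get assigned" messages above come from that helper. Assuming the usual TEST_UTIL field of an HBaseTestingUtility-based test, the call is simply:

    // Waits (default 60s, as logged) until hbase:meta and the AssignmentManager
    // both report every region of the table as open, then returns.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"));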
2023-07-24 20:10:42,918 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:42,918 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-24 20:10:42,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-24 20:10:42,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:42,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:42,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:42,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:42,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-24 20:10:42,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 62c86f0f4ddc8212a5cf0ab60e03c7e4 to RSGroup bar 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 20:10:42,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:42,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE 2023-07-24 20:10:42,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-24 20:10:42,931 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE 2023-07-24 20:10:42,932 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:42,932 DEBUG [PEWorker-2] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229442932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229442932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229442932"}]},"ts":"1690229442932"} 2023-07-24 20:10:42,934 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:43,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 62c86f0f4ddc8212a5cf0ab60e03c7e4, disabling compactions & flushes 2023-07-24 20:10:43,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:43,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:43,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. after waiting 0 ms 2023-07-24 20:10:43,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:43,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:43,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:43,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:43,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 62c86f0f4ddc8212a5cf0ab60e03c7e4 move to jenkins-hbase4.apache.org,36167,1690229431901 record at close sequenceid=2 2023-07-24 20:10:43,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,107 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSED 2023-07-24 20:10:43,107 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229443107"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229443107"}]},"ts":"1690229443107"} 2023-07-24 20:10:43,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-24 20:10:43,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 175 msec 2023-07-24 20:10:43,111 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:43,261 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:10:43,262 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:43,262 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229443262"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229443262"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229443262"}]},"ts":"1690229443262"} 2023-07-24 20:10:43,265 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:43,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:43,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 62c86f0f4ddc8212a5cf0ab60e03c7e4, NAME => 'Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:43,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:43,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,425 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,426 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:43,426 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:43,427 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 62c86f0f4ddc8212a5cf0ab60e03c7e4 columnFamilyName f 2023-07-24 20:10:43,430 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(310): Store=62c86f0f4ddc8212a5cf0ab60e03c7e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:43,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,433 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:43,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 62c86f0f4ddc8212a5cf0ab60e03c7e4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9682383840, jitterRate=-0.09825773537158966}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:43,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:43,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4., pid=80, masterSystemTime=1690229443417 2023-07-24 20:10:43,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:43,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:43,444 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:43,444 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229443444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229443444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229443444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229443444"}]},"ts":"1690229443444"} 2023-07-24 20:10:43,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-24 20:10:43,449 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,36167,1690229431901 in 181 msec 2023-07-24 20:10:43,450 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE in 520 msec 2023-07-24 20:10:43,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-24 20:10:43,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
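The REOPEN/MOVE above (pid=78) is what MoveTables triggers: the table is remapped to group bar, so its region is closed on a default-group server (45007) and reopened on a bar-group server (36167). The triggering client call, reusing the rsGroupAdmin handle from the earlier sketch, would look roughly like this:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;

    // Remap the table to group 'bar'; the master then re-opens its regions on
    // servers belonging to that group (RSGroupAdminService.MoveTables).
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");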
2023-07-24 20:10:43,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:43,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:43,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:43,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 20:10:43,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:43,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 20:10:43,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:43,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60832 deadline: 1690230643940, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-24 20:10:43,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:43,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:43,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:60832 deadline: 1690230643942, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-24 20:10:43,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-24 20:10:43,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:43,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:43,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:43,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-24 20:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 62c86f0f4ddc8212a5cf0ab60e03c7e4 to RSGroup default 2023-07-24 20:10:43,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE 2023-07-24 20:10:43,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 20:10:43,952 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE 2023-07-24 20:10:43,953 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:43,953 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229443953"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229443953"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229443953"}]},"ts":"1690229443953"} 2023-07-24 20:10:43,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:44,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 62c86f0f4ddc8212a5cf0ab60e03c7e4, disabling compactions & flushes 2023-07-24 20:10:44,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:44,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:44,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. after waiting 0 ms 2023-07-24 20:10:44,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:44,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:44,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:44,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:44,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 62c86f0f4ddc8212a5cf0ab60e03c7e4 move to jenkins-hbase4.apache.org,45007,1690229432026 record at close sequenceid=5 2023-07-24 20:10:44,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,116 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSED 2023-07-24 20:10:44,116 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229444116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229444116"}]},"ts":"1690229444116"} 2023-07-24 20:10:44,119 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-24 20:10:44,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,36167,1690229431901 in 164 msec 2023-07-24 20:10:44,120 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:44,271 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:44,271 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229444271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229444271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229444271"}]},"ts":"1690229444271"} 2023-07-24 20:10:44,273 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:44,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 62c86f0f4ddc8212a5cf0ab60e03c7e4, NAME => 'Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,431 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,433 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:44,433 DEBUG [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f 2023-07-24 20:10:44,433 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 62c86f0f4ddc8212a5cf0ab60e03c7e4 columnFamilyName f 2023-07-24 20:10:44,434 INFO [StoreOpener-62c86f0f4ddc8212a5cf0ab60e03c7e4-1] regionserver.HStore(310): Store=62c86f0f4ddc8212a5cf0ab60e03c7e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:44,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,436 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:44,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 62c86f0f4ddc8212a5cf0ab60e03c7e4; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10821960480, jitterRate=0.00787360966205597}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:44,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:44,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4., pid=83, masterSystemTime=1690229444425 2023-07-24 20:10:44,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:44,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:44,444 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:44,444 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229444444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229444444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229444444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229444444"}]},"ts":"1690229444444"} 2023-07-24 20:10:44,447 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-24 20:10:44,447 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 172 msec 2023-07-24 20:10:44,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, REOPEN/MOVE in 497 msec 2023-07-24 20:10:44,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-24 20:10:44,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
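[Editor's note] The MoveTables call above is what empties group bar of tables: every region of Group_testFailRemoveGroup is driven through a REOPEN/MOVE TransitRegionStateProcedure (pid=81) onto a server of the default group. A minimal sketch of issuing the same request from a client, assuming the branch-2.4 RSGroupAdminClient used by this test (an InterfaceAudience.Private class, so this is a sketch rather than a supported public API); the connection setup and table name are illustrative:

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the table back to the default group; the master responds by
      // reopening every region of the table on servers of the target group,
      // as seen in the TransitRegionStateProcedure (REOPEN/MOVE) entries above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
          RSGroupInfo.DEFAULT_GROUP);
    }
  }
}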
2023-07-24 20:10:44,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:44,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:44,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:44,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 20:10:44,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:44,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60832 deadline: 1690230644961, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed.
2023-07-24 20:10:44,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:44,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:44,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 20:10:44,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:44,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:44,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-24 20:10:44,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886, jenkins-hbase4.apache.org,43675,1690229432113] are moved back to bar 2023-07-24 20:10:44,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-24 20:10:44,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:44,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:44,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:44,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 20:10:44,981 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43675] ipc.CallRunner(144): callId: 203 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:35390 deadline: 1690229504980, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45007 startCode=1690229432026. As of locationSeqNum=6. 
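[Editor's note] Even with its table gone, group bar cannot be removed while it still holds three servers; only after the MoveServers call above completes does RemoveRSGroup succeed (the ZK GroupInfo count drops to 5 in the next entries). A sketch of that teardown order, again assuming the branch-2.4 RSGroupAdminClient; the host names and ports are the ones from this log and purely illustrative:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RemoveGroupExample {
  // A group can only be removed once it holds neither tables nor servers.
  static void drainAndRemove(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36167));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43675));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36633));
    // 1. Move the remaining servers back to the default group.
    rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    // 2. The group is now empty and can be removed without a ConstraintException.
    rsGroupAdmin.removeRSGroup("bar");
  }
}

The RegionMovedException at the end of the previous entry is the expected client-side symptom of the earlier table move: a scanner still pointing at the old server (port 43675) is told the region now lives on port 45007, refreshes its cached location, and retries.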
2023-07-24 20:10:45,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:45,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:45,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,120 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-24 20:10:45,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-24 20:10:45,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 20:10:45,126 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229445126"}]},"ts":"1690229445126"} 2023-07-24 20:10:45,127 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-24 20:10:45,130 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-24 20:10:45,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, UNASSIGN}] 2023-07-24 20:10:45,133 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, UNASSIGN 2023-07-24 20:10:45,134 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:45,134 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229445134"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229445134"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229445134"}]},"ts":"1690229445134"} 2023-07-24 20:10:45,136 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:45,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 20:10:45,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:45,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 62c86f0f4ddc8212a5cf0ab60e03c7e4, disabling compactions & flushes 2023-07-24 20:10:45,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:45,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:45,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. after waiting 0 ms 2023-07-24 20:10:45,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 2023-07-24 20:10:45,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 20:10:45,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4. 
2023-07-24 20:10:45,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 62c86f0f4ddc8212a5cf0ab60e03c7e4: 2023-07-24 20:10:45,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:45,297 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=62c86f0f4ddc8212a5cf0ab60e03c7e4, regionState=CLOSED 2023-07-24 20:10:45,298 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690229445297"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229445297"}]},"ts":"1690229445297"} 2023-07-24 20:10:45,301 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-24 20:10:45,301 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; CloseRegionProcedure 62c86f0f4ddc8212a5cf0ab60e03c7e4, server=jenkins-hbase4.apache.org,45007,1690229432026 in 163 msec 2023-07-24 20:10:45,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 20:10:45,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=62c86f0f4ddc8212a5cf0ab60e03c7e4, UNASSIGN in 170 msec 2023-07-24 20:10:45,304 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229445304"}]},"ts":"1690229445304"} 2023-07-24 20:10:45,306 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-24 20:10:45,314 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-24 20:10:45,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 194 msec 2023-07-24 20:10:45,335 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 20:10:45,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 20:10:45,429 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-24 20:10:45,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-24 20:10:45,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,433 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-24 20:10:45,435 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=87, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:45,440 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:45,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-24 20:10:45,442 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits] 2023-07-24 20:10:45,451 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/10.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4/recovered.edits/10.seqid 2023-07-24 20:10:45,452 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testFailRemoveGroup/62c86f0f4ddc8212a5cf0ab60e03c7e4 2023-07-24 20:10:45,452 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 20:10:45,457 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=87, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,460 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-24 20:10:45,463 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-24 20:10:45,464 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=87, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,465 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-24 20:10:45,465 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229445465"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:45,467 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 20:10:45,467 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 62c86f0f4ddc8212a5cf0ab60e03c7e4, NAME => 'Group_testFailRemoveGroup,,1690229441800.62c86f0f4ddc8212a5cf0ab60e03c7e4.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 20:10:45,467 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-24 20:10:45,467 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229445467"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:45,469 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-24 20:10:45,472 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=87, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 20:10:45,473 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 42 msec 2023-07-24 20:10:45,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-24 20:10:45,543 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-24 20:10:45,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:45,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
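[Editor's note] Dropping the test table is a two-procedure affair: DisableTableProcedure (pid=84) unassigns the region and flips the table state to DISABLED, after which DeleteTableProcedure (pid=87) archives the region directories, clears hbase:meta, and removes the descriptor. From the client side this is the usual pair of Admin calls; a short sketch, with the table name taken from this test:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTableExample {
  static void dropTable(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      // A table must be disabled before it can be deleted; the disable call
      // blocks until all of its regions have been unassigned.
      admin.disableTable(table);
      admin.deleteTable(table);
    }
  }
}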
2023-07-24 20:10:45,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:45,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:45,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:45,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:45,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:45,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:45,559 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:45,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:45,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:45,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:45,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:45,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:45,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230645577, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:45,579 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:45,581 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:45,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,582 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:45,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:45,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:45,605 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=497 (was 495) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-475340857_17 at /127.0.0.1:60142 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7619449a-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1499272638_17 at /127.0.0.1:38638 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=762 (was 761) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 393) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=5956 (was 6261) 2023-07-24 20:10:45,626 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=497, OpenFileDescriptor=762, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=176, AvailableMemoryMB=5955 2023-07-24 20:10:45,627 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-24 20:10:45,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:45,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:10:45,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:45,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:45,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:45,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:45,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:45,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:45,645 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:45,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:45,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 
20:10:45,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:45,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:45,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:45,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:45,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230645664, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:45,665 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:45,669 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:45,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,671 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:45,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:45,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:45,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:45,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:45,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_42167359 2023-07-24 20:10:45,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:45,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:45,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:45,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,685 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167] to rsgroup Group_testMultiTableMove_42167359 2023-07-24 20:10:45,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:45,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:45,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:45,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901] are moved back to default 2023-07-24 20:10:45,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_42167359 2023-07-24 20:10:45,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:45,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:45,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:45,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_42167359 2023-07-24 20:10:45,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:45,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:45,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:45,703 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:45,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 88 2023-07-24 20:10:45,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 20:10:45,705 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:45,706 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:45,706 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:45,707 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:45,709 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:45,710 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:45,711 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 empty. 2023-07-24 20:10:45,711 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:45,711 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 20:10:45,739 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:45,741 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => a750a9c1125194728c194b99fb0ce8c1, NAME => 'GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
a750a9c1125194728c194b99fb0ce8c1, disabling compactions & flushes 2023-07-24 20:10:45,769 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. after waiting 0 ms 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:45,769 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:45,769 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for a750a9c1125194728c194b99fb0ce8c1: 2023-07-24 20:10:45,773 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:45,774 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229445774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229445774"}]},"ts":"1690229445774"} 2023-07-24 20:10:45,776 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:10:45,777 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:45,777 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229445777"}]},"ts":"1690229445777"} 2023-07-24 20:10:45,779 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-24 20:10:45,782 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:45,782 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:45,782 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:45,782 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:45,782 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:45,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, ASSIGN}] 2023-07-24 20:10:45,788 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, ASSIGN 2023-07-24 20:10:45,790 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:45,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 20:10:45,941 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 20:10:45,942 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:45,942 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229445942"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229445942"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229445942"}]},"ts":"1690229445942"} 2023-07-24 20:10:45,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:46,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 20:10:46,100 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:46,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a750a9c1125194728c194b99fb0ce8c1, NAME => 'GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:46,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:46,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,102 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,104 DEBUG [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/f 2023-07-24 20:10:46,104 DEBUG [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/f 2023-07-24 20:10:46,104 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a750a9c1125194728c194b99fb0ce8c1 columnFamilyName f 2023-07-24 20:10:46,105 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] regionserver.HStore(310): Store=a750a9c1125194728c194b99fb0ce8c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:46,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:46,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:46,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a750a9c1125194728c194b99fb0ce8c1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11382116640, jitterRate=0.06004221737384796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:46,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a750a9c1125194728c194b99fb0ce8c1: 2023-07-24 20:10:46,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1., pid=90, masterSystemTime=1690229446095 2023-07-24 20:10:46,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:46,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 
2023-07-24 20:10:46,114 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:46,114 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446114"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229446114"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229446114"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229446114"}]},"ts":"1690229446114"} 2023-07-24 20:10:46,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-24 20:10:46,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,45007,1690229432026 in 172 msec 2023-07-24 20:10:46,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-24 20:10:46,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, ASSIGN in 334 msec 2023-07-24 20:10:46,120 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:46,120 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229446120"}]},"ts":"1690229446120"} 2023-07-24 20:10:46,121 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-24 20:10:46,130 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:46,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 429 msec 2023-07-24 20:10:46,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 20:10:46,309 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 88 completed 2023-07-24 20:10:46,309 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-24 20:10:46,309 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:46,314 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-24 20:10:46,314 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:46,314 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-24 20:10:46,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:46,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:46,319 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:46,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 91 2023-07-24 20:10:46,325 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:46,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 20:10:46,326 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:46,327 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:46,327 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:46,330 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:46,332 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,333 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c empty. 
2023-07-24 20:10:46,333 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,334 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 20:10:46,359 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-24 20:10:46,363 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:46,365 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 09ee48d4678f86a7765ad9f36f74772c, NAME => 'GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 09ee48d4678f86a7765ad9f36f74772c, disabling compactions & flushes 2023-07-24 20:10:46,385 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. after waiting 0 ms 2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:46,385 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
2023-07-24 20:10:46,385 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 09ee48d4678f86a7765ad9f36f74772c: 2023-07-24 20:10:46,388 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:46,389 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446389"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229446389"}]},"ts":"1690229446389"} 2023-07-24 20:10:46,391 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:10:46,392 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:46,392 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229446392"}]},"ts":"1690229446392"} 2023-07-24 20:10:46,393 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-24 20:10:46,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:46,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:46,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:46,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:46,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:46,396 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, ASSIGN}] 2023-07-24 20:10:46,398 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, ASSIGN 2023-07-24 20:10:46,399 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36633,1690229435886; forceNewPlan=false, retain=false 2023-07-24 20:10:46,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 20:10:46,549 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 20:10:46,551 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:46,551 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446551"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229446551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229446551"}]},"ts":"1690229446551"} 2023-07-24 20:10:46,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:46,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 20:10:46,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:46,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 09ee48d4678f86a7765ad9f36f74772c, NAME => 'GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:46,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:46,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,711 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,713 DEBUG [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/f 2023-07-24 20:10:46,713 DEBUG [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/f 2023-07-24 20:10:46,714 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 09ee48d4678f86a7765ad9f36f74772c columnFamilyName f 2023-07-24 20:10:46,714 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] regionserver.HStore(310): Store=09ee48d4678f86a7765ad9f36f74772c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:46,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:46,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:46,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 09ee48d4678f86a7765ad9f36f74772c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10484318080, jitterRate=-0.023571789264678955}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:46,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 09ee48d4678f86a7765ad9f36f74772c: 2023-07-24 20:10:46,722 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c., pid=93, masterSystemTime=1690229446704 2023-07-24 20:10:46,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:46,724 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
2023-07-24 20:10:46,724 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:46,724 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446724"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229446724"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229446724"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229446724"}]},"ts":"1690229446724"} 2023-07-24 20:10:46,736 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-24 20:10:46,736 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36633,1690229435886 in 181 msec 2023-07-24 20:10:46,738 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 20:10:46,738 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, ASSIGN in 340 msec 2023-07-24 20:10:46,739 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:46,739 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229446739"}]},"ts":"1690229446739"} 2023-07-24 20:10:46,740 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-24 20:10:46,744 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:46,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 428 msec 2023-07-24 20:10:46,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 20:10:46,930 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 91 completed 2023-07-24 20:10:46,931 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-24 20:10:46,931 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:46,935 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
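Annotation: the CREATE operation for GrouptestMultiTableMoveB (procId 91) completes in the entries above. The client code that issued it is not part of this log; what follows is only a minimal sketch of the corresponding Admin call, assuming a single column family 'f' as shown by the store opened for region 09ee48d4678f86a7765ad9f36f74772c.

// Hypothetical sketch; class and variable names are illustrative, not taken from the log.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Single column family 'f', matching the store opener entries above.
      admin.createTable(TableDescriptorBuilder.newBuilder(tableB)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
      // createTable() returns only after the CreateTableProcedure (pid=91 here) finishes,
      // which is why the caller can immediately wait for all regions to be assigned.
    }
  }
}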
2023-07-24 20:10:46,936 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:46,936 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-24 20:10:46,936 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:46,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 20:10:46,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:46,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 20:10:46,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:46,956 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_42167359 2023-07-24 20:10:46,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_42167359 2023-07-24 20:10:46,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:46,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:46,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:46,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:46,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_42167359 2023-07-24 20:10:46,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 09ee48d4678f86a7765ad9f36f74772c to RSGroup Group_testMultiTableMove_42167359 2023-07-24 20:10:46,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, REOPEN/MOVE 2023-07-24 20:10:46,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_42167359 2023-07-24 20:10:46,968 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, REOPEN/MOVE 2023-07-24 20:10:46,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region a750a9c1125194728c194b99fb0ce8c1 to RSGroup Group_testMultiTableMove_42167359 2023-07-24 20:10:46,970 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:46,970 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446969"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229446969"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229446969"}]},"ts":"1690229446969"} 2023-07-24 20:10:46,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, REOPEN/MOVE 2023-07-24 20:10:46,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_42167359, current retry=0 2023-07-24 20:10:46,972 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, REOPEN/MOVE 2023-07-24 20:10:46,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=94, state=RUNNABLE; CloseRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:46,976 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:46,977 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229446976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229446976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229446976"}]},"ts":"1690229446976"} 2023-07-24 20:10:46,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=95, state=RUNNABLE; CloseRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:47,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 09ee48d4678f86a7765ad9f36f74772c, disabling compactions & flushes 2023-07-24 20:10:47,136 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a750a9c1125194728c194b99fb0ce8c1, disabling compactions & flushes 2023-07-24 20:10:47,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:47,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. after waiting 0 ms 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. after waiting 0 ms 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:47,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:47,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
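Annotation: the region closes above are the master acting on the MoveTables request logged at 20:10:46,959. The client side of that request is not captured here; a hedged sketch using the rsgroup client, with only the table and group names taken from the log, would be:

// Hedged sketch of the RSGroupAdminService.MoveTables call recorded above.
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  static void moveBothTables(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
    // The master reacts by closing each region on its current server and reopening it on a
    // server in the target group: the REOPEN/MOVE procedures pid=94 and pid=95 above/below.
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_42167359");
  }
}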
2023-07-24 20:10:47,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a750a9c1125194728c194b99fb0ce8c1: 2023-07-24 20:10:47,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 09ee48d4678f86a7765ad9f36f74772c: 2023-07-24 20:10:47,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a750a9c1125194728c194b99fb0ce8c1 move to jenkins-hbase4.apache.org,36167,1690229431901 record at close sequenceid=2 2023-07-24 20:10:47,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 09ee48d4678f86a7765ad9f36f74772c move to jenkins-hbase4.apache.org,36167,1690229431901 record at close sequenceid=2 2023-07-24 20:10:47,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,144 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=CLOSED 2023-07-24 20:10:47,145 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447144"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229447144"}]},"ts":"1690229447144"} 2023-07-24 20:10:47,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,146 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=CLOSED 2023-07-24 20:10:47,146 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447146"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229447146"}]},"ts":"1690229447146"} 2023-07-24 20:10:47,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=94 2023-07-24 20:10:47,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=94, state=SUCCESS; CloseRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36633,1690229435886 in 174 msec 2023-07-24 20:10:47,149 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:47,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=95 2023-07-24 20:10:47,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=95, state=SUCCESS; CloseRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,45007,1690229432026 in 168 msec 2023-07-24 20:10:47,150 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, 
region=a750a9c1125194728c194b99fb0ce8c1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36167,1690229431901; forceNewPlan=false, retain=false 2023-07-24 20:10:47,300 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:47,300 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:47,300 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447300"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229447300"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229447300"}]},"ts":"1690229447300"} 2023-07-24 20:10:47,300 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447300"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229447300"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229447300"}]},"ts":"1690229447300"} 2023-07-24 20:10:47,302 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=95, state=RUNNABLE; OpenRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:47,305 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=94, state=RUNNABLE; OpenRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:47,462 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 
2023-07-24 20:10:47,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a750a9c1125194728c194b99fb0ce8c1, NAME => 'GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:47,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:47,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,466 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,468 DEBUG [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/f 2023-07-24 20:10:47,468 DEBUG [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/f 2023-07-24 20:10:47,469 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a750a9c1125194728c194b99fb0ce8c1 columnFamilyName f 2023-07-24 20:10:47,474 INFO [StoreOpener-a750a9c1125194728c194b99fb0ce8c1-1] regionserver.HStore(310): Store=a750a9c1125194728c194b99fb0ce8c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:47,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:47,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a750a9c1125194728c194b99fb0ce8c1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10526709120, jitterRate=-0.01962381601333618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:47,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a750a9c1125194728c194b99fb0ce8c1: 2023-07-24 20:10:47,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1., pid=98, masterSystemTime=1690229447457 2023-07-24 20:10:47,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,488 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:47,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
2023-07-24 20:10:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 09ee48d4678f86a7765ad9f36f74772c, NAME => 'GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:47,489 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,489 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447489"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229447489"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229447489"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229447489"}]},"ts":"1690229447489"} 2023-07-24 20:10:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,493 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=95 2023-07-24 20:10:47,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=95, state=SUCCESS; OpenRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,36167,1690229431901 in 190 msec 2023-07-24 20:10:47,495 DEBUG [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/f 2023-07-24 20:10:47,495 DEBUG [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/f 2023-07-24 20:10:47,495 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 09ee48d4678f86a7765ad9f36f74772c columnFamilyName f 2023-07-24 20:10:47,496 INFO [StoreOpener-09ee48d4678f86a7765ad9f36f74772c-1] regionserver.HStore(310): Store=09ee48d4678f86a7765ad9f36f74772c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:47,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,500 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, REOPEN/MOVE in 525 msec 2023-07-24 20:10:47,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:47,504 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 09ee48d4678f86a7765ad9f36f74772c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11968678080, jitterRate=0.11467000842094421}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:47,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 09ee48d4678f86a7765ad9f36f74772c: 2023-07-24 20:10:47,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c., pid=99, masterSystemTime=1690229447457 2023-07-24 20:10:47,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:47,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
2023-07-24 20:10:47,508 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:47,508 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447508"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229447508"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229447508"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229447508"}]},"ts":"1690229447508"} 2023-07-24 20:10:47,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=94 2023-07-24 20:10:47,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=94, state=SUCCESS; OpenRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36167,1690229431901 in 205 msec 2023-07-24 20:10:47,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, REOPEN/MOVE in 549 msec 2023-07-24 20:10:47,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=94 2023-07-24 20:10:47,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_42167359. 2023-07-24 20:10:47,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:47,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:47,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:47,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 20:10:47,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:47,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 20:10:47,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:47,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:47,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:47,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_42167359 2023-07-24 20:10:47,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:47,984 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-24 20:10:47,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-24 20:10:47,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:47,989 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229447989"}]},"ts":"1690229447989"} 2023-07-24 20:10:47,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 20:10:47,991 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-24 20:10:47,992 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-24 20:10:47,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, UNASSIGN}] 2023-07-24 20:10:47,995 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, UNASSIGN 2023-07-24 20:10:47,995 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:47,995 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229447995"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229447995"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229447995"}]},"ts":"1690229447995"} 2023-07-24 20:10:47,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,36167,1690229431901}] 
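Annotation: the GetRSGroupInfoOfTable and GetRSGroupInfo requests logged just above are consistent with the test verifying that both tables now resolve to Group_testMultiTableMove_42167359 before tearing them down. The check below is illustrative only and is not part of the captured log.

// Illustrative verification sketch; names mirror the log, the assertion style is assumed.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupSketch {
  static void assertTablesInGroup(RSGroupAdminClient rsGroupAdmin, String expectedGroup) throws Exception {
    for (String name : new String[] {"GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB"}) {
      // getRSGroupInfoOfTable issues the GetRSGroupInfoOfTable RPC seen in the entries above.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf(name));
      if (info == null || !expectedGroup.equals(info.getName())) {
        throw new AssertionError(name + " is not in rsgroup " + expectedGroup);
      }
    }
  }
}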
2023-07-24 20:10:48,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 20:10:48,150 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:48,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a750a9c1125194728c194b99fb0ce8c1, disabling compactions & flushes 2023-07-24 20:10:48,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:48,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:48,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. after waiting 0 ms 2023-07-24 20:10:48,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 2023-07-24 20:10:48,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:48,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1. 
2023-07-24 20:10:48,161 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a750a9c1125194728c194b99fb0ce8c1: 2023-07-24 20:10:48,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:48,164 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=a750a9c1125194728c194b99fb0ce8c1, regionState=CLOSED 2023-07-24 20:10:48,164 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229448164"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229448164"}]},"ts":"1690229448164"} 2023-07-24 20:10:48,167 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-24 20:10:48,167 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure a750a9c1125194728c194b99fb0ce8c1, server=jenkins-hbase4.apache.org,36167,1690229431901 in 169 msec 2023-07-24 20:10:48,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-24 20:10:48,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=a750a9c1125194728c194b99fb0ce8c1, UNASSIGN in 174 msec 2023-07-24 20:10:48,170 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229448170"}]},"ts":"1690229448170"} 2023-07-24 20:10:48,171 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-24 20:10:48,173 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-24 20:10:48,175 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 189 msec 2023-07-24 20:10:48,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 20:10:48,292 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-24 20:10:48,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-24 20:10:48,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,296 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_42167359' 2023-07-24 20:10:48,297 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:48,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:48,301 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:48,304 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits] 2023-07-24 20:10:48,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-24 20:10:48,310 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1/recovered.edits/7.seqid 2023-07-24 20:10:48,310 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveA/a750a9c1125194728c194b99fb0ce8c1 2023-07-24 20:10:48,310 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 20:10:48,313 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,315 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-24 20:10:48,317 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-24 20:10:48,318 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,318 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
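Annotation: GrouptestMultiTableMoveA is removed with the usual disable-then-delete sequence (pids 100 and 103), and the same pattern repeats for GrouptestMultiTableMoveB further down. As a sketch, the client side is roughly the following; the existence and enabled guards are assumptions and do not appear in this log.

// Sketch of the disable/delete sequence driving the DisableTableProcedure and
// DeleteTableProcedure entries above.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableSketch {
  static void dropTable(Admin admin, TableName table) throws Exception {
    if (admin.tableExists(table)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);  // unassign regions, mark the table DISABLED in hbase:meta
      }
      admin.deleteTable(table);     // archive region dirs, delete region and table state rows from hbase:meta
    }
  }
}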
2023-07-24 20:10:48,318 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229448318"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:48,320 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 20:10:48,320 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a750a9c1125194728c194b99fb0ce8c1, NAME => 'GrouptestMultiTableMoveA,,1690229445700.a750a9c1125194728c194b99fb0ce8c1.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 20:10:48,320 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-24 20:10:48,320 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229448320"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:48,321 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-24 20:10:48,323 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 20:10:48,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 30 msec 2023-07-24 20:10:48,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-24 20:10:48,405 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-24 20:10:48,406 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-24 20:10:48,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-24 20:10:48,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 20:10:48,410 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229448410"}]},"ts":"1690229448410"} 2023-07-24 20:10:48,412 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-24 20:10:48,413 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-24 20:10:48,414 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, UNASSIGN}] 2023-07-24 20:10:48,416 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, UNASSIGN 2023-07-24 20:10:48,417 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:48,417 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229448417"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229448417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229448417"}]},"ts":"1690229448417"} 2023-07-24 20:10:48,418 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36167,1690229431901}] 2023-07-24 20:10:48,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 20:10:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:48,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 09ee48d4678f86a7765ad9f36f74772c, disabling compactions & flushes 2023-07-24 20:10:48,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:48,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:48,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. after waiting 0 ms 2023-07-24 20:10:48,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 2023-07-24 20:10:48,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:48,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c. 
2023-07-24 20:10:48,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 09ee48d4678f86a7765ad9f36f74772c: 2023-07-24 20:10:48,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:48,578 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=09ee48d4678f86a7765ad9f36f74772c, regionState=CLOSED 2023-07-24 20:10:48,578 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690229448578"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229448578"}]},"ts":"1690229448578"} 2023-07-24 20:10:48,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-24 20:10:48,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 09ee48d4678f86a7765ad9f36f74772c, server=jenkins-hbase4.apache.org,36167,1690229431901 in 161 msec 2023-07-24 20:10:48,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-24 20:10:48,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=09ee48d4678f86a7765ad9f36f74772c, UNASSIGN in 167 msec 2023-07-24 20:10:48,583 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229448583"}]},"ts":"1690229448583"} 2023-07-24 20:10:48,584 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-24 20:10:48,586 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-24 20:10:48,587 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 179 msec 2023-07-24 20:10:48,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 20:10:48,713 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-24 20:10:48,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-24 20:10:48,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,716 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_42167359' 2023-07-24 20:10:48,717 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:48,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:48,721 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:48,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 20:10:48,723 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits] 2023-07-24 20:10:48,729 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits/7.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c/recovered.edits/7.seqid 2023-07-24 20:10:48,729 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/GrouptestMultiTableMoveB/09ee48d4678f86a7765ad9f36f74772c 2023-07-24 20:10:48,729 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 20:10:48,732 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,734 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-24 20:10:48,735 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-24 20:10:48,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-24 20:10:48,737 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229448737"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:48,741 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 20:10:48,741 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 09ee48d4678f86a7765ad9f36f74772c, NAME => 'GrouptestMultiTableMoveB,,1690229446316.09ee48d4678f86a7765ad9f36f74772c.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 20:10:48,741 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-24 20:10:48,741 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229448741"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:48,743 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-24 20:10:48,749 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 20:10:48,750 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 35 msec 2023-07-24 20:10:48,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 20:10:48,824 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-24 20:10:48,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:48,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:10:48,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:48,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167] to rsgroup default 2023-07-24 20:10:48,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_42167359 2023-07-24 20:10:48,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:48,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_42167359, current retry=0 2023-07-24 20:10:48,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901] are moved back to Group_testMultiTableMove_42167359 2023-07-24 20:10:48,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_42167359 => default 2023-07-24 20:10:48,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:48,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_42167359 2023-07-24 20:10:48,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:48,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:48,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:48,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:10:48,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:48,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:48,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:48,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:48,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:48,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:48,851 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:48,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:48,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:48,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:48,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:48,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:48,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230648860, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:48,861 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:48,863 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:48,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,864 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:48,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:48,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,882 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=495 (was 497), OpenFileDescriptor=761 (was 762), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 394), ProcessCount=176 (was 176), AvailableMemoryMB=5904 (was 5955) 2023-07-24 20:10:48,898 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=495, OpenFileDescriptor=761, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=176, AvailableMemoryMB=5903 2023-07-24 20:10:48,898 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-24 20:10:48,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:48,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:10:48,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:48,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:48,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:48,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:48,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:48,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:48,914 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:48,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:48,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:48,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:48,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:48,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230648925, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:48,926 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:48,927 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:48,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,929 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:48,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:48,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:48,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-24 20:10:48,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:48,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:48,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:48,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup oldGroup 2023-07-24 20:10:48,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:48,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:48,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:48,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to default 2023-07-24 20:10:48,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-24 20:10:48,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:48,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 20:10:48,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 20:10:48,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:48,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:48,970 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-24 20:10:48,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:48,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 20:10:48,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:48,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:48,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:48,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:48,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:48,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:48,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43675] to rsgroup anotherRSGroup 2023-07-24 20:10:49,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 20:10:49,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:49,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:49,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:49,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43675,1690229432113] are moved back to default 2023-07-24 20:10:49,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-24 20:10:49,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,023 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 20:10:49,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 20:10:49,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-24 20:10:49,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:60832 deadline: 1690230649217, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-24 20:10:49,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-24 20:10:49,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:60832 deadline: 1690230649220, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-24 20:10:49,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-24 20:10:49,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:60832 deadline: 1690230649222, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-24 20:10:49,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-24 20:10:49,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:60832 deadline: 1690230649224, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-24 20:10:49,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:49,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
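The three rejected renames above (source group missing, target name already taken, and the reserved default group) all surface as ConstraintException thrown from RSGroupInfoManagerImpl.renameRSGroup. A minimal, self-contained sketch of that validation order, reconstructed from the log messages rather than taken from the HBase source; IllegalStateException stands in for ConstraintException:

import java.util.HashMap;
import java.util.Map;

public class RenameRSGroupValidation {
  static final String DEFAULT_GROUP = "default";
  // Group name -> group metadata; a stand-in for the real RSGroupInfo map.
  private final Map<String, Object> rsGroupMap = new HashMap<>();

  // Checks in the order the log implies: default group first, then existence, then name collision.
  void renameRSGroup(String oldName, String newName) {
    if (DEFAULT_GROUP.equals(oldName)) {
      throw new IllegalStateException("Can't rename default rsgroup");
    }
    if (!rsGroupMap.containsKey(oldName)) {
      throw new IllegalStateException("RSGroup " + oldName + " does not exist");
    }
    if (rsGroupMap.containsKey(newName)) {
      throw new IllegalStateException("Group already exists: " + newName);
    }
    rsGroupMap.put(newName, rsGroupMap.remove(oldName));
  }
}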
2023-07-24 20:10:49,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:49,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43675] to rsgroup default 2023-07-24 20:10:49,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 20:10:49,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:49,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:49,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-24 20:10:49,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43675,1690229432113] are moved back to anotherRSGroup 2023-07-24 20:10:49,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-24 20:10:49,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-24 20:10:49,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:49,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 20:10:49,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:49,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:49,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-24 20:10:49,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:49,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:49,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 20:10:49,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-24 20:10:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to oldGroup 2023-07-24 20:10:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-24 20:10:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-24 20:10:49,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:49,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:49,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:49,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
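Each successful group mutation above is persisted by rewriting the per-group znodes under /hbase/rsgroup and then writing the GroupInfo count. To inspect that state directly, a hedged sketch using the plain ZooKeeper client; the /hbase/rsgroup path comes from the log, while the quorum string and timeout are placeholders (the mini-cluster assigns its client port dynamically):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListRSGroupZNodes {
  public static void main(String[] args) throws Exception {
    // Placeholder quorum; substitute the actual ZooKeeper quorum of the cluster under inspection.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
    try {
      // One child znode per rsgroup, matching the "Updating znode: /hbase/rsgroup/<group>" lines.
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      groups.forEach(System.out::println);
    } finally {
      zk.close();
    }
  }
}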
2023-07-24 20:10:49,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:49,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:49,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:49,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:49,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:49,274 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:49,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:49,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:49,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:49,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:49,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230649287, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:49,287 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:49,289 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,290 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,309 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=499 (was 495) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=740 (was 761), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 394) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=5841 (was 5903) 2023-07-24 20:10:49,326 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=499, OpenFileDescriptor=740, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=176, AvailableMemoryMB=5841 2023-07-24 20:10:49,326 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-24 20:10:49,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:49,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
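The ResourceChecker lines above record thread count, open file descriptors, load average, process count and free memory before and after each test and flag suspicious growth (the "Thread LEAK?" annotation). A rough sketch of that style of before/after accounting, written for illustration and not the actual hbase.ResourceChecker implementation:

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class SimpleResourceCheck {
  private long threadsBefore;
  private long fdsBefore;

  private static long openFds() {
    // Only meaningful on Unix-like JVMs, which the Jenkins agents in this log are.
    return ((UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean())
        .getOpenFileDescriptorCount();
  }

  public void before() {
    threadsBefore = ManagementFactory.getThreadMXBean().getThreadCount();
    fdsBefore = openFds();
  }

  public void after(String testName) {
    long threadsAfter = ManagementFactory.getThreadMXBean().getThreadCount();
    long fdsAfter = openFds();
    System.out.printf("after: %s Thread=%d (was %d), OpenFileDescriptor=%d (was %d)%n",
        testName, threadsAfter, threadsBefore, fdsAfter, fdsBefore);
    if (threadsAfter > threadsBefore) {
      System.out.println(" - Thread LEAK? -");
    }
  }
}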
2023-07-24 20:10:49,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:49,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:49,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:49,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:49,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:49,339 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:49,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:49,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:49,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:49,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:49,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:49,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230649350, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:49,351 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:49,352 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:49,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,353 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:49,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:49,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:49,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-24 20:10:49,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:49,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:49,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:49,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup oldgroup 2023-07-24 20:10:49,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:49,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:49,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:49,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to default 2023-07-24 20:10:49,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-24 20:10:49,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:49,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:49,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:49,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 20:10:49,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:49,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:49,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-24 20:10:49,388 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:49,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 108 2023-07-24 20:10:49,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 20:10:49,391 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:49,391 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:49,392 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:49,392 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:49,395 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:49,397 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,397 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/testRename/fb389637a88cedef2b8738060d31ce38 empty. 
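The create 'testRename' request above carries the full descriptor for the single 'tr' family (one version, 64 KB blocks, no compression) and runs through the CreateTableProcedure states CREATE_TABLE_PRE_OPERATION, CREATE_TABLE_WRITE_FS_LAYOUT, CREATE_TABLE_ADD_TO_META and CREATE_TABLE_ASSIGN_REGIONS. A client-side equivalent using the standard HBase 2.x Admin API would look roughly like the sketch below; the connection setup assumes an hbase-site.xml on the classpath, and only the non-default attributes from the logged descriptor are set explicitly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the descriptor logged by HMaster: REGION_REPLICATION => '1', family 'tr',
      // VERSIONS => '1', BLOCKSIZE => '65536'; everything else is left at its default.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
              .setMaxVersions(1)
              .setBlocksize(65536)
              .build())
          .build());
    }
  }
}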
2023-07-24 20:10:49,398 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,398 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-24 20:10:49,464 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:49,467 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb389637a88cedef2b8738060d31ce38, NAME => 'testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:49,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 20:10:49,502 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:49,503 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing fb389637a88cedef2b8738060d31ce38, disabling compactions & flushes 2023-07-24 20:10:49,503 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:49,503 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:49,503 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. after waiting 0 ms 2023-07-24 20:10:49,503 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:49,503 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 
2023-07-24 20:10:49,503 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:49,507 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:49,508 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229449508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229449508"}]},"ts":"1690229449508"} 2023-07-24 20:10:49,510 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:10:49,511 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:49,511 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229449511"}]},"ts":"1690229449511"} 2023-07-24 20:10:49,513 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-24 20:10:49,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:49,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:49,519 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:49,519 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:49,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, ASSIGN}] 2023-07-24 20:10:49,529 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, ASSIGN 2023-07-24 20:10:49,530 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:49,680 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 20:10:49,681 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:49,682 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229449681"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229449681"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229449681"}]},"ts":"1690229449681"} 2023-07-24 20:10:49,683 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:49,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 20:10:49,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:49,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb389637a88cedef2b8738060d31ce38, NAME => 'testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:49,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:49,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,843 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,845 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:49,845 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:49,846 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb389637a88cedef2b8738060d31ce38 columnFamilyName tr 2023-07-24 20:10:49,847 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(310): Store=fb389637a88cedef2b8738060d31ce38/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:49,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:49,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:49,858 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb389637a88cedef2b8738060d31ce38; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9672297920, jitterRate=-0.09919705986976624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:49,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:49,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38., pid=110, masterSystemTime=1690229449835 2023-07-24 20:10:49,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:49,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 
2023-07-24 20:10:49,863 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:49,863 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229449863"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229449863"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229449863"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229449863"}]},"ts":"1690229449863"} 2023-07-24 20:10:49,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-24 20:10:49,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,45007,1690229432026 in 190 msec 2023-07-24 20:10:49,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-24 20:10:49,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, ASSIGN in 357 msec 2023-07-24 20:10:49,879 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:49,879 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229449879"}]},"ts":"1690229449879"} 2023-07-24 20:10:49,880 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-24 20:10:49,883 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:49,884 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateTableProcedure table=testRename in 499 msec 2023-07-24 20:10:50,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 20:10:50,001 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 108 completed 2023-07-24 20:10:50,001 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-24 20:10:50,001 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:50,005 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-24 20:10:50,005 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:50,006 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
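The block above is CreateTableProcedure pid=108 walking through CREATE_TABLE_WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS and POST_OPERATION for 'testRename' (one region, one column family 'tr'). A minimal client-side equivalent using the public HBase 2.x Admin API is sketched below; the class name and the assumption that hbase-site.xml points at this cluster are mine, not the test's.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();            // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)                               // REGION_REPLICATION => '1' in the descriptor above
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
              .setMaxVersions(1)                                 // VERSIONS => '1'
              .build())
          .build();
      admin.createTable(td);                                     // the master then runs the CreateTableProcedure seen in the log
    }
  }
}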
2023-07-24 20:10:50,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-24 20:10:50,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:50,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:50,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:50,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:50,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-24 20:10:50,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region fb389637a88cedef2b8738060d31ce38 to RSGroup oldgroup 2023-07-24 20:10:50,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:50,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:50,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:50,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:50,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:50,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE 2023-07-24 20:10:50,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-24 20:10:50,018 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE 2023-07-24 20:10:50,019 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:50,019 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229450019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229450019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229450019"}]},"ts":"1690229450019"} 2023-07-24 20:10:50,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, 
ppid=111, state=RUNNABLE; CloseRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:50,174 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb389637a88cedef2b8738060d31ce38, disabling compactions & flushes 2023-07-24 20:10:50,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. after waiting 0 ms 2023-07-24 20:10:50,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:50,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:50,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fb389637a88cedef2b8738060d31ce38 move to jenkins-hbase4.apache.org,36633,1690229435886 record at close sequenceid=2 2023-07-24 20:10:50,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,190 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=CLOSED 2023-07-24 20:10:50,191 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229450190"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229450190"}]},"ts":"1690229450190"} 2023-07-24 20:10:50,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 20:10:50,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,45007,1690229432026 in 178 msec 2023-07-24 20:10:50,205 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36633,1690229435886; 
forceNewPlan=false, retain=false 2023-07-24 20:10:50,357 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:10:50,357 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:50,357 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229450357"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229450357"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229450357"}]},"ts":"1690229450357"} 2023-07-24 20:10:50,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:50,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb389637a88cedef2b8738060d31ce38, NAME => 'testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:50,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:50,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,515 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,516 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:50,517 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:50,517 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb389637a88cedef2b8738060d31ce38 columnFamilyName tr 2023-07-24 20:10:50,517 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(310): Store=fb389637a88cedef2b8738060d31ce38/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:50,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:50,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb389637a88cedef2b8738060d31ce38; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10025110400, jitterRate=-0.06633883714675903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:50,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:50,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38., pid=113, masterSystemTime=1690229450510 2023-07-24 20:10:50,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:50,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 
2023-07-24 20:10:50,526 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:50,526 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229450526"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229450526"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229450526"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229450526"}]},"ts":"1690229450526"} 2023-07-24 20:10:50,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111 2023-07-24 20:10:50,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,36633,1690229435886 in 168 msec 2023-07-24 20:10:50,529 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE in 513 msec 2023-07-24 20:10:50,797 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 20:10:51,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=111 2023-07-24 20:10:51,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
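Moving 'testRename' into rsgroup 'oldgroup' (the MoveTables request and the pid=111 REOPEN/MOVE that closes the region on port 45007 and reopens it on 36633) is driven through the rsgroup coprocessor endpoint. A sketch using RSGroupAdminClient from the hbase-rsgroup module follows; the connection setup and class name are assumptions, only the group and table names come from the log.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // MoveTables: reassigns every region of the table to servers in the target group
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
      // GetRSGroupInfoOfTable: read back which group the table now belongs to
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println(info.getName());                        // expected: oldgroup
    }
  }
}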
2023-07-24 20:10:51,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:51,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:51,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:51,024 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:51,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 20:10:51,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:51,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 20:10:51,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:51,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 20:10:51,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:51,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:51,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:51,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-24 20:10:51,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:51,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:51,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:51,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 
20:10:51,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:51,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:51,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:51,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:51,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43675] to rsgroup normal 2023-07-24 20:10:51,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:51,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:51,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:51,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:51,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:51,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:51,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43675,1690229432113] are moved back to default 2023-07-24 20:10:51,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-24 20:10:51,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:51,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:51,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:51,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 20:10:51,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
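The AddRSGroup and MoveServers requests above create group 'normal' and move jenkins-hbase4.apache.org:43675 out of 'default'. A hedged sketch of the same two calls with RSGroupAdminClient (connection setup assumed as before; host and port are the ones in the log):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("normal");                         // AddRSGroup request in the log
      // MoveServers: default => normal; the log shows 0 regions moved because the server is empty here
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43675)),
          "normal");
    }
  }
}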
2023-07-24 20:10:51,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:51,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-24 20:10:51,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:51,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 114 2023-07-24 20:10:51,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 20:10:51,059 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:51,059 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:51,060 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:51,060 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:51,061 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:51,066 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:51,068 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,068 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 empty. 
2023-07-24 20:10:51,069 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,069 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-24 20:10:51,093 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:51,095 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 035aaee8d7dd517f8d7815de60a3d0d1, NAME => 'unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 035aaee8d7dd517f8d7815de60a3d0d1, disabling compactions & flushes 2023-07-24 20:10:51,128 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. after waiting 0 ms 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,128 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,128 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:51,131 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:51,135 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451135"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229451135"}]},"ts":"1690229451135"} 2023-07-24 20:10:51,144 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:10:51,145 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:51,145 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229451145"}]},"ts":"1690229451145"} 2023-07-24 20:10:51,147 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-24 20:10:51,152 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, ASSIGN}] 2023-07-24 20:10:51,155 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, ASSIGN 2023-07-24 20:10:51,156 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:51,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 20:10:51,307 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:51,307 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451307"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229451307"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229451307"}]},"ts":"1690229451307"} 2023-07-24 20:10:51,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:51,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 20:10:51,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:51,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 035aaee8d7dd517f8d7815de60a3d0d1, NAME => 'unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:51,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:51,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,471 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,472 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:51,472 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:51,473 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 035aaee8d7dd517f8d7815de60a3d0d1 columnFamilyName ut 2023-07-24 20:10:51,473 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(310): Store=035aaee8d7dd517f8d7815de60a3d0d1/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:51,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:51,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 035aaee8d7dd517f8d7815de60a3d0d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10403389120, jitterRate=-0.031108886003494263}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:51,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:51,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1., pid=116, masterSystemTime=1690229451461 2023-07-24 20:10:51,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,486 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:51,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:51,486 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451486"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229451486"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229451486"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229451486"}]},"ts":"1690229451486"} 2023-07-24 20:10:51,490 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-24 20:10:51,490 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026 in 179 msec 2023-07-24 20:10:51,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-24 20:10:51,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, ASSIGN in 339 msec 2023-07-24 20:10:51,494 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:51,494 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229451494"}]},"ts":"1690229451494"} 2023-07-24 20:10:51,500 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-24 20:10:51,504 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:51,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=unmovedTable in 451 msec 2023-07-24 20:10:51,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 20:10:51,661 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 114 completed 2023-07-24 20:10:51,661 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-24 20:10:51,661 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:51,665 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-24 20:10:51,665 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:51,665 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
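Creating 'unmovedTable' ends the same way as 'testRename': the listener polls until every region is assigned before the test continues. A test-side sketch of that wait, assuming an already-started HBaseTestingUtility mini cluster (the helper name is mine):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class AssignmentWaitSketch {
  // testUtil is assumed to hold the running mini cluster from the test setup.
  static void waitForTable(HBaseTestingUtility testUtil, String table) throws java.io.IOException {
    // Blocks until every region of the table is listed in hbase:meta and known to the
    // AssignmentManager, which is what produces the "All regions ... assigned" lines above.
    testUtil.waitUntilAllRegionsAssigned(TableName.valueOf(table), 60000);
  }
}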
2023-07-24 20:10:51,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-24 20:10:51,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 20:10:51,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:51,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:51,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:51,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:51,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-24 20:10:51,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 035aaee8d7dd517f8d7815de60a3d0d1 to RSGroup normal 2023-07-24 20:10:51,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE 2023-07-24 20:10:51,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-24 20:10:51,675 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE 2023-07-24 20:10:51,675 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:51,675 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451675"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229451675"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229451675"}]},"ts":"1690229451675"} 2023-07-24 20:10:51,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:51,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 035aaee8d7dd517f8d7815de60a3d0d1, disabling compactions & flushes 2023-07-24 20:10:51,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:51,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. after waiting 0 ms 2023-07-24 20:10:51,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:51,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:51,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:51,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 035aaee8d7dd517f8d7815de60a3d0d1 move to jenkins-hbase4.apache.org,43675,1690229432113 record at close sequenceid=2 2023-07-24 20:10:51,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:51,837 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=CLOSED 2023-07-24 20:10:51,838 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451837"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229451837"}]},"ts":"1690229451837"} 2023-07-24 20:10:51,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-24 20:10:51,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026 in 162 msec 2023-07-24 20:10:51,841 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:51,991 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:51,991 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229451991"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229451991"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229451991"}]},"ts":"1690229451991"} 2023-07-24 20:10:51,993 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:52,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:52,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 035aaee8d7dd517f8d7815de60a3d0d1, NAME => 'unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:52,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:52,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,154 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,155 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:52,155 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:52,156 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
035aaee8d7dd517f8d7815de60a3d0d1 columnFamilyName ut 2023-07-24 20:10:52,157 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(310): Store=035aaee8d7dd517f8d7815de60a3d0d1/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:52,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 035aaee8d7dd517f8d7815de60a3d0d1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10106705920, jitterRate=-0.058739662170410156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:52,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:52,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1., pid=119, masterSystemTime=1690229452144 2023-07-24 20:10:52,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:52,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:52,165 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:52,166 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229452165"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229452165"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229452165"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229452165"}]},"ts":"1690229452165"} 2023-07-24 20:10:52,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-24 20:10:52,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,43675,1690229432113 in 174 msec 2023-07-24 20:10:52,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE in 494 msec 2023-07-24 20:10:52,361 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-24 20:10:52,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-24 20:10:52,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
2023-07-24 20:10:52,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:52,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:52,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:52,681 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:52,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 20:10:52,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:52,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 20:10:52,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:52,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 20:10:52,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:52,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-24 20:10:52,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:52,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:52,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:52,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:52,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-24 20:10:52,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-24 20:10:52,695 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:52,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:52,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-24 20:10:52,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:52,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 20:10:52,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:52,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 20:10:52,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:52,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:52,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:52,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-24 20:10:52,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:52,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:52,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:52,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:52,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:52,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-24 20:10:52,711 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region 035aaee8d7dd517f8d7815de60a3d0d1 to RSGroup default 2023-07-24 20:10:52,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE 2023-07-24 20:10:52,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 20:10:52,712 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE 2023-07-24 20:10:52,712 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:52,713 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229452712"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229452712"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229452712"}]},"ts":"1690229452712"} 2023-07-24 20:10:52,714 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:52,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 035aaee8d7dd517f8d7815de60a3d0d1, disabling compactions & flushes 2023-07-24 20:10:52,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:52,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:52,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. after waiting 0 ms 2023-07-24 20:10:52,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:52,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:52,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:52,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:52,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 035aaee8d7dd517f8d7815de60a3d0d1 move to jenkins-hbase4.apache.org,45007,1690229432026 record at close sequenceid=5 2023-07-24 20:10:52,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:52,882 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=CLOSED 2023-07-24 20:10:52,882 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229452882"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229452882"}]},"ts":"1690229452882"} 2023-07-24 20:10:52,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 20:10:52,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,43675,1690229432113 in 169 msec 2023-07-24 20:10:52,886 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:53,037 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:53,037 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229453037"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229453037"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229453037"}]},"ts":"1690229453037"} 2023-07-24 20:10:53,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:53,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:53,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 035aaee8d7dd517f8d7815de60a3d0d1, NAME => 'unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:53,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:53,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,197 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,198 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:53,198 DEBUG [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/ut 2023-07-24 20:10:53,199 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 035aaee8d7dd517f8d7815de60a3d0d1 columnFamilyName ut 2023-07-24 20:10:53,200 INFO [StoreOpener-035aaee8d7dd517f8d7815de60a3d0d1-1] regionserver.HStore(310): Store=035aaee8d7dd517f8d7815de60a3d0d1/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:53,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:53,206 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 035aaee8d7dd517f8d7815de60a3d0d1; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11224661120, jitterRate=0.045378029346466064}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:53,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:53,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1., pid=122, masterSystemTime=1690229453191 2023-07-24 20:10:53,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:53,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:53,209 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=035aaee8d7dd517f8d7815de60a3d0d1, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:53,209 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690229453209"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229453209"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229453209"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229453209"}]},"ts":"1690229453209"} 2023-07-24 20:10:53,212 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-24 20:10:53,212 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 035aaee8d7dd517f8d7815de60a3d0d1, server=jenkins-hbase4.apache.org,45007,1690229432026 in 172 msec 2023-07-24 20:10:53,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=035aaee8d7dd517f8d7815de60a3d0d1, REOPEN/MOVE in 501 msec 2023-07-24 20:10:53,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-24 20:10:53,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-24 20:10:53,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:53,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43675] to rsgroup default 2023-07-24 20:10:53,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 20:10:53,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:53,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:53,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:53,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:10:53,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-24 20:10:53,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43675,1690229432113] are moved back to normal 2023-07-24 20:10:53,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-24 20:10:53,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:53,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-24 20:10:53,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:53,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:53,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:53,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 20:10:53,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:53,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:53,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:10:53,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:53,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:53,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:53,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:53,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:53,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:53,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:53,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:53,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-24 20:10:53,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:53,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:53,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:53,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-24 20:10:53,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(345): Moving region fb389637a88cedef2b8738060d31ce38 to RSGroup default 2023-07-24 20:10:53,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE 2023-07-24 20:10:53,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 20:10:53,748 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE 2023-07-24 20:10:53,749 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:53,749 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229453749"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229453749"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229453749"}]},"ts":"1690229453749"} 2023-07-24 20:10:53,750 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,36633,1690229435886}] 2023-07-24 20:10:53,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:53,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb389637a88cedef2b8738060d31ce38, disabling compactions & flushes 2023-07-24 20:10:53,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:53,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:53,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. after waiting 0 ms 2023-07-24 20:10:53,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:53,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 20:10:53,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 
2023-07-24 20:10:53,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:53,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fb389637a88cedef2b8738060d31ce38 move to jenkins-hbase4.apache.org,43675,1690229432113 record at close sequenceid=5 2023-07-24 20:10:53,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:53,912 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=CLOSED 2023-07-24 20:10:53,912 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229453912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229453912"}]},"ts":"1690229453912"} 2023-07-24 20:10:53,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-24 20:10:53,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,36633,1690229435886 in 163 msec 2023-07-24 20:10:53,916 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:54,066 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:10:54,066 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:54,066 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229454066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229454066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229454066"}]},"ts":"1690229454066"} 2023-07-24 20:10:54,068 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:54,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 
2023-07-24 20:10:54,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb389637a88cedef2b8738060d31ce38, NAME => 'testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:54,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:54,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,226 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,227 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:54,227 DEBUG [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/tr 2023-07-24 20:10:54,228 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb389637a88cedef2b8738060d31ce38 columnFamilyName tr 2023-07-24 20:10:54,229 INFO [StoreOpener-fb389637a88cedef2b8738060d31ce38-1] regionserver.HStore(310): Store=fb389637a88cedef2b8738060d31ce38/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:54,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:54,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb389637a88cedef2b8738060d31ce38; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11875419680, jitterRate=0.1059846431016922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:54,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:54,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38., pid=125, masterSystemTime=1690229454219 2023-07-24 20:10:54,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:54,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:54,243 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=fb389637a88cedef2b8738060d31ce38, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:54,243 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690229454243"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229454243"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229454243"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229454243"}]},"ts":"1690229454243"} 2023-07-24 20:10:54,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-24 20:10:54,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure fb389637a88cedef2b8738060d31ce38, server=jenkins-hbase4.apache.org,43675,1690229432113 in 176 msec 2023-07-24 20:10:54,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=fb389637a88cedef2b8738060d31ce38, REOPEN/MOVE in 499 msec 2023-07-24 20:10:54,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-24 20:10:54,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-24 20:10:54,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:54,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:54,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 20:10:54,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-24 20:10:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to newgroup 2023-07-24 20:10:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-24 20:10:54,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:54,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-24 20:10:54,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:54,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:54,766 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:54,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:54,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:54,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:54,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-24 20:10:54,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 20:10:54,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 20:10:54,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master
2023-07-24 20:10:54,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-24 20:10:54,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 761 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230654779, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
2023-07-24 20:10:54,779 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-24 20:10:54,781 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-24 20:10:54,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 20:10:54,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 20:10:54,782 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-24 20:10:54,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-24 20:10:54,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-24 20:10:54,799 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=491 (was 499), OpenFileDescriptor=737 (was 740), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 426), ProcessCount=176 (was 176), AvailableMemoryMB=5751 (was 5841)
2023-07-24 20:10:54,816 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=491, OpenFileDescriptor=737, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=176, AvailableMemoryMB=5750
2023-07-24 20:10:54,816 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testBogusArgs
2023-07-24 20:10:54,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 20:10:54,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 20:10:54,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-24 20:10:54,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-24 20:10:54,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:54,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:54,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:54,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:54,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:54,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:54,831 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:54,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:54,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:54,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:54,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:54,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:54,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 789 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230654841, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:54,842 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:54,843 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:54,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,844 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:54,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:54,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:54,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-24 20:10:54,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:54,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-24 20:10:54,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-24 20:10:54,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-24 20:10:54,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:54,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-24 20:10:54,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 801 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:60832 deadline: 1690230654854, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-24 20:10:54,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-24 20:10:54,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 804 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:60832 deadline: 1690230654856, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 20:10:54,860 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 20:10:54,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-24 20:10:54,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-24 20:10:54,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:60832 deadline: 1690230654866, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 20:10:54,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:54,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:10:54,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:54,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:54,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:54,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:54,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:54,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:54,882 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:54,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:54,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:54,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:54,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:54,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:54,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 832 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230654894, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:54,898 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:54,900 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:54,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,901 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:54,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:54,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:54,921 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=495 (was 491) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x660b33c7-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=737 (was 737), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 408), ProcessCount=176 (was 176), AvailableMemoryMB=5745 (was 5750) 2023-07-24 20:10:54,939 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=495, OpenFileDescriptor=737, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=176, AvailableMemoryMB=5744 2023-07-24 20:10:54,940 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-24 20:10:54,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:54,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:10:54,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:54,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:54,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:54,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:54,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:54,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:54,961 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:54,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:54,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:54,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:54,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:54,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:54,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:54,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 860 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230654974, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:54,975 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:54,976 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:54,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:54,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:54,978 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:54,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:54,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:54,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:54,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:54,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1766431688 2023-07-24 20:10:54,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:54,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 
20:10:54,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:54,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:54,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:55,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:55,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:55,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup Group_testDisabledTableMove_1766431688 2023-07-24 20:10:55,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:55,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 20:10:55,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:55,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:55,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 20:10:55,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to default 2023-07-24 20:10:55,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1766431688 2023-07-24 20:10:55,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:55,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:55,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:55,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1766431688 2023-07-24 20:10:55,015 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:55,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:55,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:55,020 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:10:55,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 126 2023-07-24 20:10:55,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 20:10:55,022 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:55,022 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 20:10:55,023 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:55,023 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:55,025 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:10:55,039 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,039 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,039 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,040 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,040 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,040 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e empty. 2023-07-24 20:10:55,040 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd empty. 2023-07-24 20:10:55,041 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 empty. 2023-07-24 20:10:55,041 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,041 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf empty. 2023-07-24 20:10:55,041 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,041 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 empty. 
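The AddRSGroup, MoveServers and GetRSGroupInfo requests logged above are issued by the test's RSGroup admin client. Below is a minimal Java sketch of the client-side calls that would produce such RPCs, assuming the branch-2.4 org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient API; the constructor and method signatures are assumed here and may differ slightly between branches. The group name and server addresses are copied from the log entries above; the helper method name moveServersIntoTestGroup is purely illustrative.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch only: signatures follow the branch-2.4 rsgroup module as far as known here.
    void moveServersIntoTestGroup(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testDisabledTableMove_1766431688";   // name from the log above
      rsGroupAdmin.addRSGroup(group);                             // -> AddRSGroup RPC
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromString("jenkins-hbase4.apache.org:36167"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:36633"));
      rsGroupAdmin.moveServers(servers, group);                   // -> MoveServers RPC
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);      // -> GetRSGroupInfo RPC
      assert info.getServers().containsAll(servers);
    }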
2023-07-24 20:10:55,042 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,042 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,042 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,042 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 20:10:55,068 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:55,069 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 29dbb36222266d5592bc66a5bba6e4bd, NAME => 'Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:55,070 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => be5392d6e06ecee19e2100a86e5b546e, NAME => 'Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:55,070 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 794ba90e4d49fd6fee9c0996deb51646, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:55,129 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 20:10:55,154 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,154 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 794ba90e4d49fd6fee9c0996deb51646, disabling compactions & flushes 2023-07-24 20:10:55,154 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,154 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,154 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. after waiting 0 ms 2023-07-24 20:10:55,154 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,155 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,155 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 794ba90e4d49fd6fee9c0996deb51646: 2023-07-24 20:10:55,155 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,155 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => d15df88c035f9adf47c3a9543e61bacf, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:55,155 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing be5392d6e06ecee19e2100a86e5b546e, disabling compactions & flushes 2023-07-24 20:10:55,155 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 
2023-07-24 20:10:55,155 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:55,155 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. after waiting 0 ms 2023-07-24 20:10:55,156 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:55,156 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:55,156 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for be5392d6e06ecee19e2100a86e5b546e: 2023-07-24 20:10:55,156 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => a29dfa03b68f1edf73c5acf45c0bf5a2, NAME => 'Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp 2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing d15df88c035f9adf47c3a9543e61bacf, disabling compactions & flushes 2023-07-24 20:10:55,179 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. after waiting 0 ms 2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:55,179 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 
2023-07-24 20:10:55,179 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for d15df88c035f9adf47c3a9543e61bacf: 2023-07-24 20:10:55,186 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,186 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing a29dfa03b68f1edf73c5acf45c0bf5a2, disabling compactions & flushes 2023-07-24 20:10:55,186 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,187 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,187 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. after waiting 0 ms 2023-07-24 20:10:55,187 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,187 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,187 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for a29dfa03b68f1edf73c5acf45c0bf5a2: 2023-07-24 20:10:55,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 29dbb36222266d5592bc66a5bba6e4bd, disabling compactions & flushes 2023-07-24 20:10:55,531 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. after waiting 0 ms 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 
2023-07-24 20:10:55,531 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 29dbb36222266d5592bc66a5bba6e4bd: 2023-07-24 20:10:55,534 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:10:55,535 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229455535"}]},"ts":"1690229455535"} 2023-07-24 20:10:55,535 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229455535"}]},"ts":"1690229455535"} 2023-07-24 20:10:55,535 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229455535"}]},"ts":"1690229455535"} 2023-07-24 20:10:55,535 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229455535"}]},"ts":"1690229455535"} 2023-07-24 20:10:55,535 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229455535"}]},"ts":"1690229455535"} 2023-07-24 20:10:55,537 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
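The CreateTableProcedure above lays out five regions for 'Group_testDisabledTableMove' and adds them to hbase:meta with the split points visible in the Put rows. A minimal sketch of how such a pre-split table can be created through the public Admin API follows; the actual test helper and its split-key computation are not shown in this log, so the split keys below are illustrative placeholders rather than the test's real boundaries.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch only: a table with column family 'f' and four split points,
    // which yields five regions as in the log above.
    void createPreSplitTable(Admin admin) throws Exception {
      TableName tn = TableName.valueOf("Group_testDisabledTableMove");
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Placeholder split keys; the test's real keys include non-printable bytes
      // (e.g. 'i\xBF\x14i\xBE') produced by an even split of the key range.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("fffff"),
          Bytes.toBytes("ppppp"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splitKeys);   // drives a CreateTableProcedure such as pid=126 here
    }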
2023-07-24 20:10:55,538 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:10:55,538 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229455538"}]},"ts":"1690229455538"} 2023-07-24 20:10:55,539 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-24 20:10:55,546 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:55,546 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:55,546 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:55,546 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:10:55,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, ASSIGN}, {pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, ASSIGN}, {pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, ASSIGN}, {pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, ASSIGN}] 2023-07-24 20:10:55,549 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, ASSIGN 2023-07-24 20:10:55,549 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, ASSIGN 2023-07-24 20:10:55,549 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, ASSIGN 2023-07-24 20:10:55,550 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, ASSIGN 2023-07-24 20:10:55,550 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:55,550 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:55,550 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, ASSIGN 2023-07-24 20:10:55,550 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:55,550 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1690229432113; forceNewPlan=false, retain=false 2023-07-24 20:10:55,551 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45007,1690229432026; forceNewPlan=false, retain=false 2023-07-24 20:10:55,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 20:10:55,700 INFO [jenkins-hbase4:43005] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
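At this point the master has spawned the ASSIGN subprocedures (pids 127-131) and the balancer has picked target servers. A test normally blocks until the create-table procedure finishes and every region is assigned; a minimal sketch, assuming the HBaseTestingUtility instance (called TEST_UTIL here, which is an assumption) and an Admin handle are available:

    // Sketch only: wait for table creation / region assignment to finish.
    TableName tn = TableName.valueOf("Group_testDisabledTableMove");
    TEST_UTIL.waitUntilAllRegionsAssigned(tn);   // returns once all 5 regions are OPEN in hbase:meta
    while (!admin.isTableAvailable(tn)) {        // master keeps answering "Checking to see if procedure is done pid=126" until then
      Thread.sleep(100);
    }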
2023-07-24 20:10:55,704 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=a29dfa03b68f1edf73c5acf45c0bf5a2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:55,704 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=29dbb36222266d5592bc66a5bba6e4bd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,704 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=be5392d6e06ecee19e2100a86e5b546e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:55,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=794ba90e4d49fd6fee9c0996deb51646, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,704 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=d15df88c035f9adf47c3a9543e61bacf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,704 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229455704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229455704"}]},"ts":"1690229455704"} 2023-07-24 20:10:55,704 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229455704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229455704"}]},"ts":"1690229455704"} 2023-07-24 20:10:55,704 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229455704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229455704"}]},"ts":"1690229455704"} 2023-07-24 20:10:55,704 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229455704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229455704"}]},"ts":"1690229455704"} 2023-07-24 20:10:55,704 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229455704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229455704"}]},"ts":"1690229455704"} 2023-07-24 20:10:55,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=129, state=RUNNABLE; OpenRegionProcedure 794ba90e4d49fd6fee9c0996deb51646, 
server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:55,706 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=128, state=RUNNABLE; OpenRegionProcedure be5392d6e06ecee19e2100a86e5b546e, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:55,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=127, state=RUNNABLE; OpenRegionProcedure 29dbb36222266d5592bc66a5bba6e4bd, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:55,709 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=131, state=RUNNABLE; OpenRegionProcedure a29dfa03b68f1edf73c5acf45c0bf5a2, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:55,710 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=130, state=RUNNABLE; OpenRegionProcedure d15df88c035f9adf47c3a9543e61bacf, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:55,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 29dbb36222266d5592bc66a5bba6e4bd, NAME => 'Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 
2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be5392d6e06ecee19e2100a86e5b546e, NAME => 'Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,864 INFO [StoreOpener-29dbb36222266d5592bc66a5bba6e4bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,864 INFO [StoreOpener-be5392d6e06ecee19e2100a86e5b546e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,866 DEBUG [StoreOpener-29dbb36222266d5592bc66a5bba6e4bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/f 2023-07-24 20:10:55,866 DEBUG [StoreOpener-be5392d6e06ecee19e2100a86e5b546e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/f 2023-07-24 20:10:55,866 DEBUG [StoreOpener-be5392d6e06ecee19e2100a86e5b546e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/f 2023-07-24 20:10:55,866 DEBUG [StoreOpener-29dbb36222266d5592bc66a5bba6e4bd-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/f 2023-07-24 20:10:55,866 INFO [StoreOpener-be5392d6e06ecee19e2100a86e5b546e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be5392d6e06ecee19e2100a86e5b546e columnFamilyName f 2023-07-24 20:10:55,866 INFO [StoreOpener-29dbb36222266d5592bc66a5bba6e4bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 29dbb36222266d5592bc66a5bba6e4bd columnFamilyName f 2023-07-24 20:10:55,867 INFO [StoreOpener-be5392d6e06ecee19e2100a86e5b546e-1] regionserver.HStore(310): Store=be5392d6e06ecee19e2100a86e5b546e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:55,867 INFO [StoreOpener-29dbb36222266d5592bc66a5bba6e4bd-1] regionserver.HStore(310): Store=29dbb36222266d5592bc66a5bba6e4bd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:55,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:55,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:55,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:55,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be5392d6e06ecee19e2100a86e5b546e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10280348480, jitterRate=-0.042567938566207886}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:55,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be5392d6e06ecee19e2100a86e5b546e: 2023-07-24 20:10:55,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:55,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e., pid=133, masterSystemTime=1690229455859 2023-07-24 20:10:55,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 29dbb36222266d5592bc66a5bba6e4bd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9826729440, jitterRate=-0.08481450378894806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:55,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 29dbb36222266d5592bc66a5bba6e4bd: 2023-07-24 20:10:55,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd., pid=134, masterSystemTime=1690229455858 2023-07-24 20:10:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:55,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:55,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 
2023-07-24 20:10:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a29dfa03b68f1edf73c5acf45c0bf5a2, NAME => 'Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 20:10:55,877 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=be5392d6e06ecee19e2100a86e5b546e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,877 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455877"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229455877"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229455877"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229455877"}]},"ts":"1690229455877"} 2023-07-24 20:10:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,878 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=29dbb36222266d5592bc66a5bba6e4bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 
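The OpenRegionProcedures above place all regions on the servers that stayed in the default group (ports 43675 and 45007), not on the two servers moved into Group_testDisabledTableMove_1766431688 (ports 36167 and 36633). A small sketch of how a test could verify that placement through RegionLocator; the method name verifyPlacement and the assertion style are illustrative only.

    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Sketch only: no region of the table should be hosted by a server that was
    // moved into the new rsgroup (ports 36167 and 36633 in this run).
    void verifyPlacement(Connection conn) throws Exception {
      TableName tn = TableName.valueOf("Group_testDisabledTableMove");
      try (RegionLocator locator = conn.getRegionLocator(tn)) {
        List<HRegionLocation> locations = locator.getAllRegionLocations();
        for (HRegionLocation loc : locations) {
          int port = loc.getServerName().getPort();
          assert port != 36167 && port != 36633;
        }
      }
    }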
2023-07-24 20:10:55,878 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455878"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229455878"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229455878"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229455878"}]},"ts":"1690229455878"} 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 794ba90e4d49fd6fee9c0996deb51646, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,879 INFO [StoreOpener-a29dfa03b68f1edf73c5acf45c0bf5a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,880 INFO [StoreOpener-794ba90e4d49fd6fee9c0996deb51646-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,880 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=128 2023-07-24 20:10:55,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=128, state=SUCCESS; OpenRegionProcedure be5392d6e06ecee19e2100a86e5b546e, server=jenkins-hbase4.apache.org,45007,1690229432026 in 173 msec 2023-07-24 20:10:55,881 DEBUG [StoreOpener-a29dfa03b68f1edf73c5acf45c0bf5a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/f 2023-07-24 20:10:55,881 DEBUG [StoreOpener-a29dfa03b68f1edf73c5acf45c0bf5a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/f 2023-07-24 20:10:55,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, 
resume processing ppid=127 2023-07-24 20:10:55,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=127, state=SUCCESS; OpenRegionProcedure 29dbb36222266d5592bc66a5bba6e4bd, server=jenkins-hbase4.apache.org,43675,1690229432113 in 172 msec 2023-07-24 20:10:55,881 INFO [StoreOpener-a29dfa03b68f1edf73c5acf45c0bf5a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a29dfa03b68f1edf73c5acf45c0bf5a2 columnFamilyName f 2023-07-24 20:10:55,881 DEBUG [StoreOpener-794ba90e4d49fd6fee9c0996deb51646-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/f 2023-07-24 20:10:55,882 DEBUG [StoreOpener-794ba90e4d49fd6fee9c0996deb51646-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/f 2023-07-24 20:10:55,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, ASSIGN in 334 msec 2023-07-24 20:10:55,882 INFO [StoreOpener-a29dfa03b68f1edf73c5acf45c0bf5a2-1] regionserver.HStore(310): Store=a29dfa03b68f1edf73c5acf45c0bf5a2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:55,882 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, ASSIGN in 334 msec 2023-07-24 20:10:55,882 INFO [StoreOpener-794ba90e4d49fd6fee9c0996deb51646-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 794ba90e4d49fd6fee9c0996deb51646 columnFamilyName f 2023-07-24 20:10:55,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,883 INFO 
[StoreOpener-794ba90e4d49fd6fee9c0996deb51646-1] regionserver.HStore(310): Store=794ba90e4d49fd6fee9c0996deb51646/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:55,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:55,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:55,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a29dfa03b68f1edf73c5acf45c0bf5a2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10338524640, jitterRate=-0.037149861454963684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:55,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a29dfa03b68f1edf73c5acf45c0bf5a2: 2023-07-24 20:10:55,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:55,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2., pid=135, masterSystemTime=1690229455859 2023-07-24 20:10:55,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 794ba90e4d49fd6fee9c0996deb51646; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10903745760, jitterRate=0.01549045741558075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:55,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 794ba90e4d49fd6fee9c0996deb51646: 2023-07-24 20:10:55,889 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646., pid=132, masterSystemTime=1690229455858 2023-07-24 20:10:55,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:55,890 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=a29dfa03b68f1edf73c5acf45c0bf5a2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:55,890 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229455890"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229455890"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229455890"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229455890"}]},"ts":"1690229455890"} 2023-07-24 20:10:55,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:55,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 
2023-07-24 20:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d15df88c035f9adf47c3a9543e61bacf, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 20:10:55,891 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=794ba90e4d49fd6fee9c0996deb51646, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,891 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455891"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229455891"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229455891"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229455891"}]},"ts":"1690229455891"} 2023-07-24 20:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,892 INFO [StoreOpener-d15df88c035f9adf47c3a9543e61bacf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,893 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=131 2023-07-24 20:10:55,893 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=131, state=SUCCESS; OpenRegionProcedure a29dfa03b68f1edf73c5acf45c0bf5a2, server=jenkins-hbase4.apache.org,45007,1690229432026 in 182 msec 2023-07-24 20:10:55,894 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-24 20:10:55,894 DEBUG [StoreOpener-d15df88c035f9adf47c3a9543e61bacf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/f 2023-07-24 20:10:55,894 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; OpenRegionProcedure 794ba90e4d49fd6fee9c0996deb51646, server=jenkins-hbase4.apache.org,43675,1690229432113 in 186 msec 2023-07-24 20:10:55,894 DEBUG [StoreOpener-d15df88c035f9adf47c3a9543e61bacf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/f 2023-07-24 20:10:55,894 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, ASSIGN in 346 msec 2023-07-24 20:10:55,894 INFO [StoreOpener-d15df88c035f9adf47c3a9543e61bacf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d15df88c035f9adf47c3a9543e61bacf columnFamilyName f 2023-07-24 20:10:55,895 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, ASSIGN in 347 msec 2023-07-24 20:10:55,895 INFO [StoreOpener-d15df88c035f9adf47c3a9543e61bacf-1] regionserver.HStore(310): Store=d15df88c035f9adf47c3a9543e61bacf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:55,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:55,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:55,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d15df88c035f9adf47c3a9543e61bacf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9972373120, jitterRate=-0.07125037908554077}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:55,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d15df88c035f9adf47c3a9543e61bacf: 2023-07-24 20:10:55,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf., pid=136, masterSystemTime=1690229455858 2023-07-24 20:10:55,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:55,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:55,902 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=d15df88c035f9adf47c3a9543e61bacf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:55,902 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229455902"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229455902"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229455902"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229455902"}]},"ts":"1690229455902"} 2023-07-24 20:10:55,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=130 2023-07-24 20:10:55,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=130, state=SUCCESS; OpenRegionProcedure d15df88c035f9adf47c3a9543e61bacf, server=jenkins-hbase4.apache.org,43675,1690229432113 in 193 msec 2023-07-24 20:10:55,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=126 2023-07-24 20:10:55,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, ASSIGN in 358 msec 2023-07-24 20:10:55,907 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:10:55,907 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229455907"}]},"ts":"1690229455907"} 2023-07-24 20:10:55,908 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-24 20:10:55,910 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:10:55,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 893 msec 2023-07-24 20:10:56,016 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 20:10:56,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 20:10:56,133 INFO [Listener at 
localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 126 completed 2023-07-24 20:10:56,133 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-24 20:10:56,133 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:56,137 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-24 20:10:56,137 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:56,137 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-24 20:10:56,138 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:56,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 20:10:56,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:56,145 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 20:10:56,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 20:10:56,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 20:10:56,149 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229456149"}]},"ts":"1690229456149"} 2023-07-24 20:10:56,150 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-24 20:10:56,152 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-24 20:10:56,153 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, UNASSIGN}, {pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, UNASSIGN}, {pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, UNASSIGN}, {pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, UNASSIGN}, {pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, UNASSIGN}] 2023-07-24 20:10:56,155 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, UNASSIGN 2023-07-24 20:10:56,155 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, UNASSIGN 2023-07-24 20:10:56,155 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, UNASSIGN 2023-07-24 20:10:56,155 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, UNASSIGN 2023-07-24 20:10:56,156 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=be5392d6e06ecee19e2100a86e5b546e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:56,156 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, UNASSIGN 2023-07-24 20:10:56,156 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456156"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229456156"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229456156"}]},"ts":"1690229456156"} 2023-07-24 20:10:56,156 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=a29dfa03b68f1edf73c5acf45c0bf5a2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:56,156 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229456156"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229456156"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229456156"}]},"ts":"1690229456156"} 2023-07-24 20:10:56,156 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=29dbb36222266d5592bc66a5bba6e4bd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:56,156 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=794ba90e4d49fd6fee9c0996deb51646, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:56,156 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229456156"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229456156"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229456156"}]},"ts":"1690229456156"} 2023-07-24 20:10:56,157 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456156"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229456156"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229456156"}]},"ts":"1690229456156"} 2023-07-24 20:10:56,157 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=d15df88c035f9adf47c3a9543e61bacf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:56,157 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456157"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229456157"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229456157"}]},"ts":"1690229456157"} 2023-07-24 20:10:56,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; CloseRegionProcedure be5392d6e06ecee19e2100a86e5b546e, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:56,158 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=142, state=RUNNABLE; CloseRegionProcedure a29dfa03b68f1edf73c5acf45c0bf5a2, server=jenkins-hbase4.apache.org,45007,1690229432026}] 2023-07-24 20:10:56,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=138, state=RUNNABLE; CloseRegionProcedure 29dbb36222266d5592bc66a5bba6e4bd, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:56,159 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=140, state=RUNNABLE; CloseRegionProcedure 794ba90e4d49fd6fee9c0996deb51646, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:56,160 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure d15df88c035f9adf47c3a9543e61bacf, server=jenkins-hbase4.apache.org,43675,1690229432113}] 2023-07-24 20:10:56,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 20:10:56,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:56,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a29dfa03b68f1edf73c5acf45c0bf5a2, disabling compactions & flushes 2023-07-24 20:10:56,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 
2023-07-24 20:10:56,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:56,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. after waiting 0 ms 2023-07-24 20:10:56,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:56,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:56,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d15df88c035f9adf47c3a9543e61bacf, disabling compactions & flushes 2023-07-24 20:10:56,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:56,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:56,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. after waiting 0 ms 2023-07-24 20:10:56,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 2023-07-24 20:10:56,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:56,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2. 2023-07-24 20:10:56,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a29dfa03b68f1edf73c5acf45c0bf5a2: 2023-07-24 20:10:56,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:56,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf. 
2023-07-24 20:10:56,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d15df88c035f9adf47c3a9543e61bacf: 2023-07-24 20:10:56,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:56,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:56,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be5392d6e06ecee19e2100a86e5b546e, disabling compactions & flushes 2023-07-24 20:10:56,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:56,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:56,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. after waiting 0 ms 2023-07-24 20:10:56,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:56,324 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=a29dfa03b68f1edf73c5acf45c0bf5a2, regionState=CLOSED 2023-07-24 20:10:56,324 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229456324"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229456324"}]},"ts":"1690229456324"} 2023-07-24 20:10:56,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:56,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:56,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 29dbb36222266d5592bc66a5bba6e4bd, disabling compactions & flushes 2023-07-24 20:10:56,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:56,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 2023-07-24 20:10:56,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. after waiting 0 ms 2023-07-24 20:10:56,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 
2023-07-24 20:10:56,326 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=d15df88c035f9adf47c3a9543e61bacf, regionState=CLOSED 2023-07-24 20:10:56,326 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456326"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229456326"}]},"ts":"1690229456326"} 2023-07-24 20:10:56,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=142 2023-07-24 20:10:56,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=142, state=SUCCESS; CloseRegionProcedure a29dfa03b68f1edf73c5acf45c0bf5a2, server=jenkins-hbase4.apache.org,45007,1690229432026 in 168 msec 2023-07-24 20:10:56,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:56,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e. 2023-07-24 20:10:56,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be5392d6e06ecee19e2100a86e5b546e: 2023-07-24 20:10:56,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-24 20:10:56,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure d15df88c035f9adf47c3a9543e61bacf, server=jenkins-hbase4.apache.org,43675,1690229432113 in 167 msec 2023-07-24 20:10:56,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a29dfa03b68f1edf73c5acf45c0bf5a2, UNASSIGN in 175 msec 2023-07-24 20:10:56,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:56,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd. 
2023-07-24 20:10:56,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 29dbb36222266d5592bc66a5bba6e4bd: 2023-07-24 20:10:56,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:56,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d15df88c035f9adf47c3a9543e61bacf, UNASSIGN in 177 msec 2023-07-24 20:10:56,332 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=be5392d6e06ecee19e2100a86e5b546e, regionState=CLOSED 2023-07-24 20:10:56,332 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456332"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229456332"}]},"ts":"1690229456332"} 2023-07-24 20:10:56,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:56,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:56,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 794ba90e4d49fd6fee9c0996deb51646, disabling compactions & flushes 2023-07-24 20:10:56,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:56,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 2023-07-24 20:10:56,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. after waiting 0 ms 2023-07-24 20:10:56,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 
2023-07-24 20:10:56,334 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=29dbb36222266d5592bc66a5bba6e4bd, regionState=CLOSED 2023-07-24 20:10:56,334 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690229456334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229456334"}]},"ts":"1690229456334"} 2023-07-24 20:10:56,336 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-24 20:10:56,336 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; CloseRegionProcedure be5392d6e06ecee19e2100a86e5b546e, server=jenkins-hbase4.apache.org,45007,1690229432026 in 177 msec 2023-07-24 20:10:56,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=138 2023-07-24 20:10:56,337 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be5392d6e06ecee19e2100a86e5b546e, UNASSIGN in 183 msec 2023-07-24 20:10:56,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=138, state=SUCCESS; CloseRegionProcedure 29dbb36222266d5592bc66a5bba6e4bd, server=jenkins-hbase4.apache.org,43675,1690229432113 in 176 msec 2023-07-24 20:10:56,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:10:56,338 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29dbb36222266d5592bc66a5bba6e4bd, UNASSIGN in 184 msec 2023-07-24 20:10:56,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646. 
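The UNASSIGN / CloseRegionProcedure entries above are the server side of a synchronous Admin#disableTable call (DisableTableProcedure pid=137). A minimal sketch of that client call, assuming an already-open Admin handle; the helper class and method name are hypothetical, not from the test source.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DisableTableSketch {
  // Returns once the master's DisableTableProcedure finishes, i.e. all five regions
  // are CLOSED and the table is marked DISABLED in hbase:meta, as logged above.
  static void disable(Admin admin) throws IOException {
    TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
    if (!admin.isTableDisabled(tableName)) {
      admin.disableTable(tableName);   // synchronous wrapper over the DISABLE procedure
    }
  }
}

Calling disableTable a second time on the same table would throw TableNotEnabledException, which is exactly the exception the log records a little further down when the test disables the already-disabled table before deleting it.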
2023-07-24 20:10:56,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 794ba90e4d49fd6fee9c0996deb51646: 2023-07-24 20:10:56,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:56,339 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=794ba90e4d49fd6fee9c0996deb51646, regionState=CLOSED 2023-07-24 20:10:56,340 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690229456339"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229456339"}]},"ts":"1690229456339"} 2023-07-24 20:10:56,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=140 2023-07-24 20:10:56,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=140, state=SUCCESS; CloseRegionProcedure 794ba90e4d49fd6fee9c0996deb51646, server=jenkins-hbase4.apache.org,43675,1690229432113 in 181 msec 2023-07-24 20:10:56,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-24 20:10:56,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=794ba90e4d49fd6fee9c0996deb51646, UNASSIGN in 189 msec 2023-07-24 20:10:56,343 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229456343"}]},"ts":"1690229456343"} 2023-07-24 20:10:56,344 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-24 20:10:56,346 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-24 20:10:56,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 202 msec 2023-07-24 20:10:56,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 20:10:56,451 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 137 completed 2023-07-24 20:10:56,452 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,457 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:56,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-24 20:10:56,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1766431688, current retry=0 2023-07-24 20:10:56,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1766431688. 2023-07-24 20:10:56,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:56,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 20:10:56,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:10:56,467 INFO [Listener at localhost/39801] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 20:10:56,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 20:10:56,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:56,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:60832 deadline: 1690229516467, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-24 20:10:56,469 DEBUG [Listener at localhost/39801] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-24 20:10:56,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-24 20:10:56,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,473 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1766431688' 2023-07-24 20:10:56,474 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=149, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:56,481 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:56,481 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:56,481 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:56,481 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:56,481 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:56,484 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/recovered.edits] 2023-07-24 20:10:56,484 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/recovered.edits] 2023-07-24 20:10:56,484 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/recovered.edits] 2023-07-24 20:10:56,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-24 20:10:56,491 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/recovered.edits] 2023-07-24 20:10:56,491 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/f, FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/recovered.edits] 2023-07-24 20:10:56,502 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2/recovered.edits/4.seqid 2023-07-24 20:10:56,502 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/recovered.edits/4.seqid to 
hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646/recovered.edits/4.seqid 2023-07-24 20:10:56,502 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd/recovered.edits/4.seqid 2023-07-24 20:10:56,502 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e/recovered.edits/4.seqid 2023-07-24 20:10:56,503 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/recovered.edits/4.seqid to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/archive/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf/recovered.edits/4.seqid 2023-07-24 20:10:56,503 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/a29dfa03b68f1edf73c5acf45c0bf5a2 2023-07-24 20:10:56,503 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/794ba90e4d49fd6fee9c0996deb51646 2023-07-24 20:10:56,503 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/be5392d6e06ecee19e2100a86e5b546e 2023-07-24 20:10:56,503 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/29dbb36222266d5592bc66a5bba6e4bd 2023-07-24 20:10:56,504 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/.tmp/data/default/Group_testDisabledTableMove/d15df88c035f9adf47c3a9543e61bacf 2023-07-24 20:10:56,504 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 20:10:56,506 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=149, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,508 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-24 20:10:56,513 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-24 20:10:56,514 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=149, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229456514"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229456514"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229456514"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229456514"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229456514"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,516 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 20:10:56,516 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 29dbb36222266d5592bc66a5bba6e4bd, NAME => 'Group_testDisabledTableMove,,1690229455016.29dbb36222266d5592bc66a5bba6e4bd.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => be5392d6e06ecee19e2100a86e5b546e, NAME => 'Group_testDisabledTableMove,aaaaa,1690229455016.be5392d6e06ecee19e2100a86e5b546e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 794ba90e4d49fd6fee9c0996deb51646, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690229455016.794ba90e4d49fd6fee9c0996deb51646.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => d15df88c035f9adf47c3a9543e61bacf, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690229455016.d15df88c035f9adf47c3a9543e61bacf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => a29dfa03b68f1edf73c5acf45c0bf5a2, NAME => 'Group_testDisabledTableMove,zzzzz,1690229455016.a29dfa03b68f1edf73c5acf45c0bf5a2.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 20:10:56,516 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-24 20:10:56,516 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229456516"}]},"ts":"9223372036854775807"} 2023-07-24 20:10:56,517 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-24 20:10:56,519 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=149, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 20:10:56,520 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 50 msec 2023-07-24 20:10:56,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-24 20:10:56,589 INFO [Listener at localhost/39801] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-24 20:10:56,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:56,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
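For reference, the client-side sequence behind the disable/delete exchange above is roughly the following sketch (connection setup and table name are placeholders; the guard mirrors the "already disabled, so just deleting it" path taken here):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableThenDeleteSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Disabling an already-disabled table makes the master throw
          // TableNotEnabledException (the stack trace logged above), so guard first.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          // The DeleteTableProcedure (pid=149 above) then archives the regions,
          // removes their rows from hbase:meta and drops the table descriptor.
          admin.deleteTable(table);
        }
      }
    }
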
2023-07-24 20:10:56,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:56,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633] to rsgroup default 2023-07-24 20:10:56,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:10:56,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1766431688, current retry=0 2023-07-24 20:10:56,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36167,1690229431901, jenkins-hbase4.apache.org,36633,1690229435886] are moved back to Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1766431688 => default 2023-07-24 20:10:56,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:56,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1766431688 2023-07-24 20:10:56,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:10:56,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:56,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:56,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
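The teardown RPCs above (MoveTables, MoveServers, RemoveRSGroup) correspond to calls on the hbase-rsgroup client used by the test. A minimal sketch, assuming an open Connection and the server addresses shown in this run, might look like:

    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Move the group's servers back to the default group; the table itself was
          // already removed from the group when it was deleted.
          groups.moveServers(
              new HashSet<>(Arrays.asList(
                  Address.fromParts("jenkins-hbase4.apache.org", 36167),
                  Address.fromParts("jenkins-hbase4.apache.org", 36633))),
              "default");
          // Drop the now-empty group, matching the RemoveRSGroup request above.
          groups.removeRSGroup("Group_testDisabledTableMove_1766431688");
        }
      }
    }
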
2023-07-24 20:10:56,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:56,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:56,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:56,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:56,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:56,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:56,619 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:56,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:56,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:56,633 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:56,636 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,636 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,641 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:56,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:56,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230656641, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:56,641 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:10:56,643 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:56,644 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,644 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,644 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:56,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:56,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:56,670 INFO [Listener at localhost/39801] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=497 (was 495) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_535411941_17 at /127.0.0.1:35702 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f04a498-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1793085163_17 at /127.0.0.1:40620 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7619449a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=758 (was 737) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 408), ProcessCount=176 (was 176), AvailableMemoryMB=5713 (was 5744) 2023-07-24 20:10:56,691 INFO [Listener at localhost/39801] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=497, OpenFileDescriptor=758, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=176, AvailableMemoryMB=5712 2023-07-24 20:10:56,691 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-24 20:10:56,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:10:56,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:10:56,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:10:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:10:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:10:56,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:10:56,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:10:56,704 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:10:56,708 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:10:56,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:10:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:10:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:10:56,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:10:56,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:10:56,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43005] to rsgroup master 2023-07-24 20:10:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:10:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60832 deadline: 1690230656719, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 2023-07-24 20:10:56,720 WARN [Listener at localhost/39801] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43005 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:10:56,722 INFO [Listener at localhost/39801] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:10:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:10:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:10:56,723 INFO [Listener at localhost/39801] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36167, jenkins-hbase4.apache.org:36633, jenkins-hbase4.apache.org:43675, jenkins-hbase4.apache.org:45007], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:10:56,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:10:56,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:10:56,724 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 20:10:56,724 INFO [Listener at localhost/39801] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 20:10:56,724 DEBUG [Listener at localhost/39801] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a276cd2 to 127.0.0.1:52127 2023-07-24 20:10:56,725 DEBUG [Listener at localhost/39801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,727 DEBUG [Listener at localhost/39801] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 20:10:56,727 DEBUG [Listener at localhost/39801] util.JVMClusterUtil(257): Found active master hash=165676551, stopped=false 2023-07-24 20:10:56,727 DEBUG [Listener at localhost/39801] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 20:10:56,728 DEBUG [Listener at localhost/39801] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 20:10:56,728 INFO [Listener at localhost/39801] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:56,729 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:56,729 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:56,729 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:56,730 
DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:56,730 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:56,730 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:56,730 INFO [Listener at localhost/39801] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 20:10:56,730 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:56,730 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:56,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:56,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:56,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:56,731 DEBUG [Listener at localhost/39801] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39ea1c96 to 127.0.0.1:52127 2023-07-24 20:10:56,732 DEBUG [Listener at localhost/39801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,732 INFO [Listener at localhost/39801] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36167,1690229431901' ***** 2023-07-24 20:10:56,732 INFO [Listener at localhost/39801] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:10:56,732 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:10:56,733 INFO [Listener at localhost/39801] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45007,1690229432026' ***** 2023-07-24 20:10:56,734 INFO [Listener at localhost/39801] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:10:56,734 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:10:56,735 INFO [Listener at localhost/39801] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43675,1690229432113' ***** 2023-07-24 20:10:56,735 INFO [Listener at localhost/39801] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:10:56,737 INFO [Listener at localhost/39801] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36633,1690229435886' ***** 2023-07-24 20:10:56,739 INFO [Listener at 
localhost/39801] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:10:56,737 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:10:56,741 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:10:56,754 INFO [RS:2;jenkins-hbase4:43675] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@87ade83{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:56,754 INFO [RS:0;jenkins-hbase4:36167] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@f895154{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:56,754 INFO [RS:1;jenkins-hbase4:45007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e92a8f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:56,754 INFO [RS:3;jenkins-hbase4:36633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@57f5533e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:56,760 INFO [RS:0;jenkins-hbase4:36167] server.AbstractConnector(383): Stopped ServerConnector@794438b4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:56,760 INFO [RS:3;jenkins-hbase4:36633] server.AbstractConnector(383): Stopped ServerConnector@10063311{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:56,760 INFO [RS:1;jenkins-hbase4:45007] server.AbstractConnector(383): Stopped ServerConnector@5761f63f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:56,760 INFO [RS:1;jenkins-hbase4:45007] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:10:56,760 INFO [RS:2;jenkins-hbase4:43675] server.AbstractConnector(383): Stopped ServerConnector@26c1a2f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:56,761 INFO [RS:1;jenkins-hbase4:45007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eb4fc00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:10:56,760 INFO [RS:3;jenkins-hbase4:36633] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:10:56,760 INFO [RS:0;jenkins-hbase4:36167] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:10:56,761 INFO [RS:2;jenkins-hbase4:43675] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:10:56,763 INFO [RS:1;jenkins-hbase4:45007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28e4d5e5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,STOPPED} 2023-07-24 20:10:56,763 INFO [RS:3;jenkins-hbase4:36633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70449f05{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:10:56,764 INFO [RS:0;jenkins-hbase4:36167] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@3b5a29b4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:10:56,764 INFO [RS:3;jenkins-hbase4:36633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@20ac2297{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,STOPPED} 2023-07-24 20:10:56,764 INFO [RS:2;jenkins-hbase4:43675] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7abf9a1c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:10:56,766 INFO [RS:0;jenkins-hbase4:36167] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@c83b084{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,STOPPED} 2023-07-24 20:10:56,766 INFO [RS:2;jenkins-hbase4:43675] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f2f2029{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,STOPPED} 2023-07-24 20:10:56,768 INFO [RS:3;jenkins-hbase4:36633] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:10:56,769 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:10:56,769 INFO [RS:3;jenkins-hbase4:36633] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:10:56,769 INFO [RS:0;jenkins-hbase4:36167] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:10:56,769 INFO [RS:2;jenkins-hbase4:43675] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:10:56,769 INFO [RS:3;jenkins-hbase4:36633] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:10:56,769 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:10:56,769 INFO [RS:2;jenkins-hbase4:43675] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:10:56,769 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:10:56,769 INFO [RS:1;jenkins-hbase4:45007] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:10:56,769 INFO [RS:0;jenkins-hbase4:36167] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:10:56,770 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:10:56,770 INFO [RS:0;jenkins-hbase4:36167] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 20:10:56,770 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:56,770 DEBUG [RS:0;jenkins-hbase4:36167] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d1ba26a to 127.0.0.1:52127 2023-07-24 20:10:56,770 DEBUG [RS:0;jenkins-hbase4:36167] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,770 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36167,1690229431901; all regions closed. 2023-07-24 20:10:56,770 INFO [RS:1;jenkins-hbase4:45007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:10:56,770 INFO [RS:1;jenkins-hbase4:45007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:10:56,769 INFO [RS:2;jenkins-hbase4:43675] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:10:56,771 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(3305): Received CLOSE for 035aaee8d7dd517f8d7815de60a3d0d1 2023-07-24 20:10:56,771 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(3305): Received CLOSE for fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:56,769 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:56,771 DEBUG [RS:3;jenkins-hbase4:36633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02024450 to 127.0.0.1:52127 2023-07-24 20:10:56,771 DEBUG [RS:3;jenkins-hbase4:36633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,772 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36633,1690229435886; all regions closed. 2023-07-24 20:10:56,772 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(3305): Received CLOSE for fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb389637a88cedef2b8738060d31ce38, disabling compactions & flushes 2023-07-24 20:10:56,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. after waiting 0 ms 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 035aaee8d7dd517f8d7815de60a3d0d1, disabling compactions & flushes 2023-07-24 20:10:56,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 
2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. after waiting 0 ms 2023-07-24 20:10:56,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:56,772 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:56,773 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(3305): Received CLOSE for 05a513489858765cac84f66c0c371109 2023-07-24 20:10:56,774 DEBUG [RS:2;jenkins-hbase4:43675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34d28169 to 127.0.0.1:52127 2023-07-24 20:10:56,774 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:56,774 DEBUG [RS:2;jenkins-hbase4:43675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,774 DEBUG [RS:1;jenkins-hbase4:45007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x696e3fda to 127.0.0.1:52127 2023-07-24 20:10:56,774 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 20:10:56,774 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1478): Online Regions={fb389637a88cedef2b8738060d31ce38=testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38.} 2023-07-24 20:10:56,774 DEBUG [RS:1;jenkins-hbase4:45007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,775 INFO [RS:1;jenkins-hbase4:45007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:10:56,775 INFO [RS:1;jenkins-hbase4:45007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:10:56,775 INFO [RS:1;jenkins-hbase4:45007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:10:56,775 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 20:10:56,775 DEBUG [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1504): Waiting on fb389637a88cedef2b8738060d31ce38 2023-07-24 20:10:56,783 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 20:10:56,783 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1478): Online Regions={035aaee8d7dd517f8d7815de60a3d0d1=unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1., 1588230740=hbase:meta,,1.1588230740, fa4520efe7f6261623ea4e6234947d9e=hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e., 05a513489858765cac84f66c0c371109=hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109.} 2023-07-24 20:10:56,783 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1504): Waiting on 035aaee8d7dd517f8d7815de60a3d0d1, 05a513489858765cac84f66c0c371109, 1588230740, fa4520efe7f6261623ea4e6234947d9e 2023-07-24 20:10:56,783 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:10:56,784 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:10:56,784 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:10:56,784 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:10:56,784 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:10:56,784 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.94 KB heapSize=122.88 KB 2023-07-24 20:10:56,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/testRename/fb389637a88cedef2b8738060d31ce38/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 20:10:56,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/default/unmovedTable/035aaee8d7dd517f8d7815de60a3d0d1/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 20:10:56,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:56,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 035aaee8d7dd517f8d7815de60a3d0d1: 2023-07-24 20:10:56,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690229451053.035aaee8d7dd517f8d7815de60a3d0d1. 2023-07-24 20:10:56,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fa4520efe7f6261623ea4e6234947d9e, disabling compactions & flushes 2023-07-24 20:10:56,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 
2023-07-24 20:10:56,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:56,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. after waiting 0 ms 2023-07-24 20:10:56,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:56,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fa4520efe7f6261623ea4e6234947d9e 1/1 column families, dataSize=28.44 KB heapSize=46.80 KB 2023-07-24 20:10:56,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:56,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb389637a88cedef2b8738060d31ce38: 2023-07-24 20:10:56,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690229449383.fb389637a88cedef2b8738060d31ce38. 2023-07-24 20:10:56,802 DEBUG [RS:0;jenkins-hbase4:36167] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs 2023-07-24 20:10:56,803 INFO [RS:0;jenkins-hbase4:36167] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36167%2C1690229431901:(num 1690229434239) 2023-07-24 20:10:56,803 DEBUG [RS:0;jenkins-hbase4:36167] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,803 INFO [RS:0;jenkins-hbase4:36167] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,803 INFO [RS:0;jenkins-hbase4:36167] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:10:56,804 INFO [RS:0;jenkins-hbase4:36167] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:10:56,804 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:10:56,804 INFO [RS:0;jenkins-hbase4:36167] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:10:56,804 INFO [RS:0;jenkins-hbase4:36167] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:10:56,812 DEBUG [RS:3;jenkins-hbase4:36633] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs 2023-07-24 20:10:56,812 INFO [RS:3;jenkins-hbase4:36633] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36633%2C1690229435886:(num 1690229436175) 2023-07-24 20:10:56,812 DEBUG [RS:3;jenkins-hbase4:36633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,812 INFO [RS:3;jenkins-hbase4:36633] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,813 INFO [RS:0;jenkins-hbase4:36167] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36167 2023-07-24 20:10:56,823 INFO [RS:3;jenkins-hbase4:36633] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:10:56,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,831 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,831 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:10:56,836 INFO [RS:3;jenkins-hbase4:36633] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:10:56,837 INFO [RS:3;jenkins-hbase4:36633] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:10:56,837 INFO [RS:3;jenkins-hbase4:36633] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:10:56,848 INFO [RS:3;jenkins-hbase4:36633] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36633 2023-07-24 20:10:56,851 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.95 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/info/1ccbeb9bdbe142608ffb95a856e9001b 2023-07-24 20:10:56,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.44 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/.tmp/m/14b4c0f1062d400c9341b169330af8b4 2023-07-24 20:10:56,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 14b4c0f1062d400c9341b169330af8b4 2023-07-24 20:10:56,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/.tmp/m/14b4c0f1062d400c9341b169330af8b4 as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/m/14b4c0f1062d400c9341b169330af8b4 2023-07-24 20:10:56,877 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ccbeb9bdbe142608ffb95a856e9001b 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): 
regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:56,878 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36167,1690229431901 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:56,879 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36633,1690229435886 2023-07-24 20:10:56,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 14b4c0f1062d400c9341b169330af8b4 2023-07-24 20:10:56,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/m/14b4c0f1062d400c9341b169330af8b4, entries=28, sequenceid=95, filesize=6.1 K 2023-07-24 20:10:56,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.44 KB/29124, heapSize ~46.78 KB/47904, currentSize=0 B/0 for fa4520efe7f6261623ea4e6234947d9e in 92ms, sequenceid=95, compaction requested=false 2023-07-24 20:10:56,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/rsgroup/fa4520efe7f6261623ea4e6234947d9e/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-24 20:10:56,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:10:56,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 
2023-07-24 20:10:56,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fa4520efe7f6261623ea4e6234947d9e: 2023-07-24 20:10:56,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690229434920.fa4520efe7f6261623ea4e6234947d9e. 2023-07-24 20:10:56,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05a513489858765cac84f66c0c371109, disabling compactions & flushes 2023-07-24 20:10:56,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:56,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:56,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. after waiting 0 ms 2023-07-24 20:10:56,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:56,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/rep_barrier/9d69c2b091474d9f9475e5621d631559 2023-07-24 20:10:56,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/namespace/05a513489858765cac84f66c0c371109/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-24 20:10:56,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:56,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05a513489858765cac84f66c0c371109: 2023-07-24 20:10:56,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690229434847.05a513489858765cac84f66c0c371109. 2023-07-24 20:10:56,963 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9d69c2b091474d9f9475e5621d631559 2023-07-24 20:10:56,975 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43675,1690229432113; all regions closed. 
2023-07-24 20:10:56,978 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36167,1690229431901] 2023-07-24 20:10:56,978 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36167,1690229431901; numProcessing=1 2023-07-24 20:10:56,983 DEBUG [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 20:10:56,993 DEBUG [RS:2;jenkins-hbase4:43675] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs 2023-07-24 20:10:56,993 INFO [RS:2;jenkins-hbase4:43675] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43675%2C1690229432113:(num 1690229434290) 2023-07-24 20:10:56,993 DEBUG [RS:2;jenkins-hbase4:43675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:56,994 INFO [RS:2;jenkins-hbase4:43675] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:56,994 INFO [RS:2;jenkins-hbase4:43675] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:10:56,994 INFO [RS:2;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:10:56,994 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:10:56,994 INFO [RS:2;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:10:56,994 INFO [RS:2;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:10:56,999 INFO [RS:2;jenkins-hbase4:43675] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43675 2023-07-24 20:10:57,002 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/table/a90faf58ae7249d0be3b87bcccd186c9 2023-07-24 20:10:57,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a90faf58ae7249d0be3b87bcccd186c9 2023-07-24 20:10:57,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/info/1ccbeb9bdbe142608ffb95a856e9001b as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/info/1ccbeb9bdbe142608ffb95a856e9001b 2023-07-24 20:10:57,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ccbeb9bdbe142608ffb95a856e9001b 2023-07-24 20:10:57,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/info/1ccbeb9bdbe142608ffb95a856e9001b, entries=97, sequenceid=200, filesize=16.0 K 2023-07-24 20:10:57,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/rep_barrier/9d69c2b091474d9f9475e5621d631559 as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/rep_barrier/9d69c2b091474d9f9475e5621d631559 2023-07-24 20:10:57,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9d69c2b091474d9f9475e5621d631559 2023-07-24 20:10:57,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/rep_barrier/9d69c2b091474d9f9475e5621d631559, entries=18, sequenceid=200, filesize=6.9 K 2023-07-24 20:10:57,032 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/.tmp/table/a90faf58ae7249d0be3b87bcccd186c9 as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/table/a90faf58ae7249d0be3b87bcccd186c9 2023-07-24 20:10:57,039 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a90faf58ae7249d0be3b87bcccd186c9 2023-07-24 20:10:57,039 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/table/a90faf58ae7249d0be3b87bcccd186c9, entries=31, sequenceid=200, filesize=7.4 K 2023-07-24 20:10:57,040 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.94 KB/79807, heapSize ~122.83 KB/125776, currentSize=0 B/0 for 1588230740 in 256ms, sequenceid=200, compaction requested=false 2023-07-24 20:10:57,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/data/hbase/meta/1588230740/recovered.edits/203.seqid, newMaxSeqId=203, maxSeqId=1 2023-07-24 20:10:57,064 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:10:57,064 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:10:57,065 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:10:57,065 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 20:10:57,078 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,078 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36633-0x10198f4d5ba000b, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,078 INFO [RS:3;jenkins-hbase4:36633] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36633,1690229435886; zookeeper connection closed. 
2023-07-24 20:10:57,078 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c04a4c2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c04a4c2 2023-07-24 20:10:57,079 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36167,1690229431901 already deleted, retry=false 2023-07-24 20:10:57,079 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36167,1690229431901 expired; onlineServers=3 2023-07-24 20:10:57,079 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36633,1690229435886] 2023-07-24 20:10:57,080 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36633,1690229435886; numProcessing=2 2023-07-24 20:10:57,080 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:57,080 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43675,1690229432113 2023-07-24 20:10:57,080 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:57,143 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 20:10:57,143 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 20:10:57,178 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,178 INFO [RS:0;jenkins-hbase4:36167] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36167,1690229431901; zookeeper connection closed. 
2023-07-24 20:10:57,178 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:36167-0x10198f4d5ba0001, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,178 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@20636416] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@20636416 2023-07-24 20:10:57,180 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36633,1690229435886 already deleted, retry=false 2023-07-24 20:10:57,180 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36633,1690229435886 expired; onlineServers=2 2023-07-24 20:10:57,181 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43675,1690229432113] 2023-07-24 20:10:57,181 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43675,1690229432113; numProcessing=3 2023-07-24 20:10:57,182 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43675,1690229432113 already deleted, retry=false 2023-07-24 20:10:57,182 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43675,1690229432113 expired; onlineServers=1 2023-07-24 20:10:57,184 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45007,1690229432026; all regions closed. 2023-07-24 20:10:57,190 DEBUG [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs 2023-07-24 20:10:57,190 INFO [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45007%2C1690229432026.meta:.meta(num 1690229434567) 2023-07-24 20:10:57,198 DEBUG [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/oldWALs 2023-07-24 20:10:57,198 INFO [RS:1;jenkins-hbase4:45007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45007%2C1690229432026:(num 1690229434285) 2023-07-24 20:10:57,198 DEBUG [RS:1;jenkins-hbase4:45007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:57,198 INFO [RS:1;jenkins-hbase4:45007] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:10:57,199 INFO [RS:1;jenkins-hbase4:45007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:10:57,199 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 20:10:57,200 INFO [RS:1;jenkins-hbase4:45007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45007 2023-07-24 20:10:57,205 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45007,1690229432026 2023-07-24 20:10:57,205 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:57,206 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45007,1690229432026] 2023-07-24 20:10:57,206 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45007,1690229432026; numProcessing=4 2023-07-24 20:10:57,209 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45007,1690229432026 already deleted, retry=false 2023-07-24 20:10:57,209 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45007,1690229432026 expired; onlineServers=0 2023-07-24 20:10:57,209 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43005,1690229429933' ***** 2023-07-24 20:10:57,209 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 20:10:57,210 DEBUG [M:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48dd0c3c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:57,210 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:10:57,211 INFO [M:0;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@60d47226{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 20:10:57,212 INFO [M:0;jenkins-hbase4:43005] server.AbstractConnector(383): Stopped ServerConnector@7ab055ba{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:57,212 INFO [M:0;jenkins-hbase4:43005] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:10:57,212 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 20:10:57,213 INFO [M:0;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69d623f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:10:57,213 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:57,213 INFO 
[M:0;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40c11d31{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir/,STOPPED} 2023-07-24 20:10:57,213 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43005,1690229429933 2023-07-24 20:10:57,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:10:57,214 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43005,1690229429933; all regions closed. 2023-07-24 20:10:57,214 DEBUG [M:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:10:57,214 INFO [M:0;jenkins-hbase4:43005] master.HMaster(1491): Stopping master jetty server 2023-07-24 20:10:57,214 INFO [M:0;jenkins-hbase4:43005] server.AbstractConnector(383): Stopped ServerConnector@6a0c70ce{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:10:57,215 DEBUG [M:0;jenkins-hbase4:43005] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 20:10:57,215 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 20:10:57,215 DEBUG [M:0;jenkins-hbase4:43005] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 20:10:57,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229433821] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229433821,5,FailOnTimeoutGroup] 2023-07-24 20:10:57,215 INFO [M:0;jenkins-hbase4:43005] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 20:10:57,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229433824] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229433824,5,FailOnTimeoutGroup] 2023-07-24 20:10:57,215 INFO [M:0;jenkins-hbase4:43005] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 20:10:57,215 INFO [M:0;jenkins-hbase4:43005] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 20:10:57,215 DEBUG [M:0;jenkins-hbase4:43005] master.HMaster(1512): Stopping service threads 2023-07-24 20:10:57,216 INFO [M:0;jenkins-hbase4:43005] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 20:10:57,216 ERROR [M:0;jenkins-hbase4:43005] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 20:10:57,216 INFO [M:0;jenkins-hbase4:43005] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 20:10:57,216 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 20:10:57,217 DEBUG [M:0;jenkins-hbase4:43005] zookeeper.ZKUtil(398): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 20:10:57,217 WARN [M:0;jenkins-hbase4:43005] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 20:10:57,217 INFO [M:0;jenkins-hbase4:43005] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 20:10:57,217 INFO [M:0;jenkins-hbase4:43005] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 20:10:57,217 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:10:57,217 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:57,217 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:57,217 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:10:57,217 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:57,217 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=499.89 KB heapSize=597.86 KB 2023-07-24 20:10:57,280 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,280 INFO [RS:2;jenkins-hbase4:43675] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43675,1690229432113; zookeeper connection closed. 
2023-07-24 20:10:57,280 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x10198f4d5ba0003, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,281 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5bdd5f27] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5bdd5f27 2023-07-24 20:10:57,381 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,381 INFO [RS:1;jenkins-hbase4:45007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45007,1690229432026; zookeeper connection closed. 2023-07-24 20:10:57,381 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): regionserver:45007-0x10198f4d5ba0002, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,390 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3cd3df0f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3cd3df0f 2023-07-24 20:10:57,391 INFO [Listener at localhost/39801] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 20:10:57,640 INFO [M:0;jenkins-hbase4:43005] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=499.89 KB at sequenceid=1104 (bloomFilter=true), to=hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/83998331f940434e86d0070f3e022af3 2023-07-24 20:10:57,646 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/83998331f940434e86d0070f3e022af3 as hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/83998331f940434e86d0070f3e022af3 2023-07-24 20:10:57,651 INFO [M:0;jenkins-hbase4:43005] regionserver.HStore(1080): Added hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/83998331f940434e86d0070f3e022af3, entries=148, sequenceid=1104, filesize=26.2 K 2023-07-24 20:10:57,652 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegion(2948): Finished flush of dataSize ~499.89 KB/511886, heapSize ~597.84 KB/612192, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 435ms, sequenceid=1104, compaction requested=false 2023-07-24 20:10:57,654 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:57,654 DEBUG [M:0;jenkins-hbase4:43005] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:10:57,657 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:10:57,657 INFO [M:0;jenkins-hbase4:43005] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-24 20:10:57,657 INFO [M:0;jenkins-hbase4:43005] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43005 2023-07-24 20:10:57,659 DEBUG [M:0;jenkins-hbase4:43005] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43005,1690229429933 already deleted, retry=false 2023-07-24 20:10:57,760 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,760 INFO [M:0;jenkins-hbase4:43005] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43005,1690229429933; zookeeper connection closed. 2023-07-24 20:10:57,760 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): master:43005-0x10198f4d5ba0000, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:10:57,762 WARN [Listener at localhost/39801] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:10:57,767 INFO [Listener at localhost/39801] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:10:57,870 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:10:57,870 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-336442049-172.31.14.131-1690229426124 (Datanode Uuid 95ebb679-b89d-4261-bbcf-547852e4613b) service to localhost/127.0.0.1:32791 2023-07-24 20:10:57,872 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data5/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:57,872 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data6/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:57,874 WARN [Listener at localhost/39801] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:10:57,877 INFO [Listener at localhost/39801] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:10:57,979 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:10:57,979 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-336442049-172.31.14.131-1690229426124 (Datanode Uuid f9613ce8-8e43-460f-a070-d19e8d6e1e0b) service to localhost/127.0.0.1:32791 2023-07-24 20:10:57,980 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data3/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:57,980 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data4/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:57,981 WARN [Listener at localhost/39801] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:10:57,989 INFO [Listener at localhost/39801] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:10:58,092 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:10:58,092 WARN [BP-336442049-172.31.14.131-1690229426124 heartbeating to localhost/127.0.0.1:32791] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-336442049-172.31.14.131-1690229426124 (Datanode Uuid 03c59458-cfbb-467e-87d9-72ed36a77cf2) service to localhost/127.0.0.1:32791 2023-07-24 20:10:58,093 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data1/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:58,093 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/cluster_c9a4a1de-e8c8-db94-f8a1-60d638b30590/dfs/data/data2/current/BP-336442049-172.31.14.131-1690229426124] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:10:58,120 INFO [Listener at localhost/39801] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:10:58,239 INFO [Listener at localhost/39801] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 20:10:58,287 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 20:10:58,287 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.log.dir so I do NOT create it in target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95b4744c-05de-48b0-3111-991c96228af6/hadoop.tmp.dir so I do NOT create it in target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4, deleteOnExit=true 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/test.cache.data in system properties and HBase conf 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir in system properties and HBase conf 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 20:10:58,288 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 20:10:58,289 DEBUG [Listener at localhost/39801] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 20:10:58,289 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/nfs.dump.dir in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/java.io.tmpdir in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 20:10:58,290 INFO [Listener at localhost/39801] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 20:10:58,294 WARN [Listener at localhost/39801] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 20:10:58,294 WARN [Listener at localhost/39801] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 20:10:58,336 DEBUG [Listener at localhost/39801-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10198f4d5ba000a, quorum=127.0.0.1:52127, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 20:10:58,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10198f4d5ba000a, quorum=127.0.0.1:52127, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 20:10:58,338 WARN [Listener at localhost/39801] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:10:58,340 INFO [Listener at localhost/39801] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:10:58,346 INFO [Listener at localhost/39801] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/java.io.tmpdir/Jetty_localhost_40491_hdfs____8ln7qy/webapp 2023-07-24 20:10:58,440 INFO [Listener at localhost/39801] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40491 2023-07-24 20:10:58,446 WARN [Listener at localhost/39801] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 20:10:58,446 WARN [Listener at localhost/39801] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 20:10:58,498 WARN [Listener at localhost/40043] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:10:58,514 WARN [Listener at localhost/40043] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:10:58,517 WARN [Listener 
at localhost/40043] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:10:58,518 INFO [Listener at localhost/40043] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:10:58,524 INFO [Listener at localhost/40043] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/java.io.tmpdir/Jetty_localhost_33379_datanode____922li4/webapp 2023-07-24 20:10:58,617 INFO [Listener at localhost/40043] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33379 2023-07-24 20:10:58,623 WARN [Listener at localhost/42835] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:10:58,638 WARN [Listener at localhost/42835] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:10:58,639 WARN [Listener at localhost/42835] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:10:58,640 INFO [Listener at localhost/42835] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:10:58,644 INFO [Listener at localhost/42835] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/java.io.tmpdir/Jetty_localhost_41585_datanode____.wtnih0/webapp 2023-07-24 20:10:58,755 INFO [Listener at localhost/42835] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41585 2023-07-24 20:10:58,767 WARN [Listener at localhost/36107] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:10:58,780 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a832d3b2f270972: Processing first storage report for DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17 from datanode 473c608b-83c5-4a43-8f6f-aeb3de0599ac 2023-07-24 20:10:58,781 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a832d3b2f270972: from storage DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17 node DatanodeRegistration(127.0.0.1:39697, datanodeUuid=473c608b-83c5-4a43-8f6f-aeb3de0599ac, infoPort=45071, infoSecurePort=0, ipcPort=42835, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:10:58,781 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a832d3b2f270972: Processing first storage report for DS-2320a22d-ce67-406e-bcfc-03b37eb5a60f from datanode 473c608b-83c5-4a43-8f6f-aeb3de0599ac 2023-07-24 20:10:58,781 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a832d3b2f270972: from storage DS-2320a22d-ce67-406e-bcfc-03b37eb5a60f node DatanodeRegistration(127.0.0.1:39697, datanodeUuid=473c608b-83c5-4a43-8f6f-aeb3de0599ac, infoPort=45071, infoSecurePort=0, ipcPort=42835, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: false, processing time: 0 
msecs, invalidatedBlocks: 0 2023-07-24 20:10:58,794 WARN [Listener at localhost/36107] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:10:58,796 WARN [Listener at localhost/36107] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:10:58,797 INFO [Listener at localhost/36107] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:10:58,805 INFO [Listener at localhost/36107] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/java.io.tmpdir/Jetty_localhost_38899_datanode____3qyj3u/webapp 2023-07-24 20:10:58,883 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc8b386a3464d010d: Processing first storage report for DS-f14ec800-3e52-411a-9a3d-5e242c57be58 from datanode 1acf5df3-252d-4e9d-8302-c699363d9bd3 2023-07-24 20:10:58,883 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc8b386a3464d010d: from storage DS-f14ec800-3e52-411a-9a3d-5e242c57be58 node DatanodeRegistration(127.0.0.1:45801, datanodeUuid=1acf5df3-252d-4e9d-8302-c699363d9bd3, infoPort=40843, infoSecurePort=0, ipcPort=36107, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:10:58,884 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc8b386a3464d010d: Processing first storage report for DS-2d03b4a4-e9d2-4953-8dc4-f9171f6ddc41 from datanode 1acf5df3-252d-4e9d-8302-c699363d9bd3 2023-07-24 20:10:58,884 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc8b386a3464d010d: from storage DS-2d03b4a4-e9d2-4953-8dc4-f9171f6ddc41 node DatanodeRegistration(127.0.0.1:45801, datanodeUuid=1acf5df3-252d-4e9d-8302-c699363d9bd3, infoPort=40843, infoSecurePort=0, ipcPort=36107, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:10:58,925 INFO [Listener at localhost/36107] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38899 2023-07-24 20:10:58,933 WARN [Listener at localhost/33155] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:10:59,059 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4ba70d542641b624: Processing first storage report for DS-7390ff1b-060a-43d8-af8b-85103d48df07 from datanode 122e2113-359c-4528-a04a-4ba857e45fcf 2023-07-24 20:10:59,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4ba70d542641b624: from storage DS-7390ff1b-060a-43d8-af8b-85103d48df07 node DatanodeRegistration(127.0.0.1:42709, datanodeUuid=122e2113-359c-4528-a04a-4ba857e45fcf, infoPort=40029, infoSecurePort=0, ipcPort=33155, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:10:59,059 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4ba70d542641b624: Processing first storage 
report for DS-1aca16cf-c067-4c6b-931b-1293bbfd580e from datanode 122e2113-359c-4528-a04a-4ba857e45fcf 2023-07-24 20:10:59,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4ba70d542641b624: from storage DS-1aca16cf-c067-4c6b-931b-1293bbfd580e node DatanodeRegistration(127.0.0.1:42709, datanodeUuid=122e2113-359c-4528-a04a-4ba857e45fcf, infoPort=40029, infoSecurePort=0, ipcPort=33155, storageInfo=lv=-57;cid=testClusterID;nsid=907166242;c=1690229458297), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:10:59,145 DEBUG [Listener at localhost/33155] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799 2023-07-24 20:10:59,148 INFO [Listener at localhost/33155] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/zookeeper_0, clientPort=56754, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 20:10:59,149 INFO [Listener at localhost/33155] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56754 2023-07-24 20:10:59,150 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,151 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,171 INFO [Listener at localhost/33155] util.FSUtils(471): Created version file at hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423 with version=8 2023-07-24 20:10:59,171 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/hbase-staging 2023-07-24 20:10:59,172 DEBUG [Listener at localhost/33155] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 20:10:59,172 DEBUG [Listener at localhost/33155] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 20:10:59,172 DEBUG [Listener at localhost/33155] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 20:10:59,172 DEBUG [Listener at localhost/33155] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
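[editor's note] The run above tears down the previous minicluster ("Minicluster is down") and brings up a fresh one with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}, after which the new DFS, datanodes, MiniZooKeeperCluster and hbase version file come up. A minimal sketch of the HBaseTestingUtility calls that typically drive this sequence is shown below; the option values are taken from the log, while the surrounding test scaffolding is assumed for illustration.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Matches the StartMiniClusterOption printed in the log:
        // 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();

        util.startMiniCluster(option);   // produces the "STARTING DFS" / property-setting entries
        try {
          // test body would run here
        } finally {
          util.shutdownMiniCluster();    // produces the "Minicluster is down" entry
        }
      }
    }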
2023-07-24 20:10:59,173 INFO [Listener at localhost/33155] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:10:59,173 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,173 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,174 INFO [Listener at localhost/33155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:10:59,174 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,174 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:10:59,174 INFO [Listener at localhost/33155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:10:59,176 INFO [Listener at localhost/33155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45907 2023-07-24 20:10:59,177 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,178 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,180 INFO [Listener at localhost/33155] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45907 connecting to ZooKeeper ensemble=127.0.0.1:56754 2023-07-24 20:10:59,190 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:459070x0, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:59,192 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45907-0x10198f54bce0000 connected 2023-07-24 20:10:59,211 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:10:59,212 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:59,212 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:10:59,213 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45907 2023-07-24 20:10:59,213 DEBUG [Listener at localhost/33155] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45907 2023-07-24 20:10:59,213 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45907 2023-07-24 20:10:59,217 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45907 2023-07-24 20:10:59,218 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45907 2023-07-24 20:10:59,221 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:10:59,221 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:10:59,221 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:10:59,222 INFO [Listener at localhost/33155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 20:10:59,222 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:10:59,222 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:10:59,223 INFO [Listener at localhost/33155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 20:10:59,223 INFO [Listener at localhost/33155] http.HttpServer(1146): Jetty bound to port 35575 2023-07-24 20:10:59,224 INFO [Listener at localhost/33155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:59,226 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,226 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@735f8834{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:10:59,226 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,226 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f454535{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:10:59,233 INFO [Listener at localhost/33155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:10:59,234 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:10:59,234 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:10:59,235 INFO [Listener at localhost/33155] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:10:59,236 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,237 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6682e202{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 20:10:59,239 INFO [Listener at localhost/33155] server.AbstractConnector(333): Started ServerConnector@857110a{HTTP/1.1, (http/1.1)}{0.0.0.0:35575} 2023-07-24 20:10:59,239 INFO [Listener at localhost/33155] server.Server(415): Started @35193ms 2023-07-24 20:10:59,239 INFO [Listener at localhost/33155] master.HMaster(444): hbase.rootdir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423, hbase.cluster.distributed=false 2023-07-24 20:10:59,259 INFO [Listener at localhost/33155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:10:59,260 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,260 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,260 INFO [Listener at localhost/33155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 
20:10:59,260 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,260 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:10:59,260 INFO [Listener at localhost/33155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:10:59,262 INFO [Listener at localhost/33155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44593 2023-07-24 20:10:59,263 INFO [Listener at localhost/33155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:10:59,264 DEBUG [Listener at localhost/33155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:10:59,264 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,265 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,266 INFO [Listener at localhost/33155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44593 connecting to ZooKeeper ensemble=127.0.0.1:56754 2023-07-24 20:10:59,271 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:445930x0, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:59,272 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44593-0x10198f54bce0001 connected 2023-07-24 20:10:59,272 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:10:59,273 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:59,273 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:10:59,274 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44593 2023-07-24 20:10:59,274 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44593 2023-07-24 20:10:59,277 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44593 2023-07-24 20:10:59,278 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44593 2023-07-24 20:10:59,278 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44593 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:10:59,280 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:10:59,281 INFO [Listener at localhost/33155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 20:10:59,282 INFO [Listener at localhost/33155] http.HttpServer(1146): Jetty bound to port 32935 2023-07-24 20:10:59,282 INFO [Listener at localhost/33155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:59,285 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,286 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@98fe701{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:10:59,286 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,286 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@25816971{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:10:59,295 INFO [Listener at localhost/33155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:10:59,296 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:10:59,296 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:10:59,297 INFO [Listener at localhost/33155] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:10:59,298 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,299 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@981894e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:59,300 INFO [Listener at localhost/33155] server.AbstractConnector(333): Started ServerConnector@5e91eb27{HTTP/1.1, (http/1.1)}{0.0.0.0:32935} 2023-07-24 20:10:59,300 INFO [Listener at localhost/33155] server.Server(415): Started @35254ms 2023-07-24 20:10:59,317 INFO [Listener at localhost/33155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:10:59,317 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,317 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,318 INFO [Listener at localhost/33155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:10:59,318 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,318 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:10:59,318 INFO [Listener at localhost/33155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:10:59,322 INFO [Listener at localhost/33155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35907 2023-07-24 20:10:59,322 INFO [Listener at localhost/33155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:10:59,323 DEBUG [Listener at localhost/33155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:10:59,324 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,324 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,325 INFO [Listener at localhost/33155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35907 connecting to ZooKeeper ensemble=127.0.0.1:56754 2023-07-24 20:10:59,329 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:359070x0, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:59,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35907-0x10198f54bce0002 connected 2023-07-24 20:10:59,331 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): 
regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:10:59,332 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:59,332 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:10:59,335 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35907 2023-07-24 20:10:59,337 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35907 2023-07-24 20:10:59,338 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35907 2023-07-24 20:10:59,340 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35907 2023-07-24 20:10:59,341 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35907 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:10:59,343 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:10:59,344 INFO [Listener at localhost/33155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 20:10:59,344 INFO [Listener at localhost/33155] http.HttpServer(1146): Jetty bound to port 37461 2023-07-24 20:10:59,344 INFO [Listener at localhost/33155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:59,351 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,351 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1cdf441b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:10:59,351 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,352 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@62368e3d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:10:59,358 INFO [Listener at localhost/33155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:10:59,359 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:10:59,359 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:10:59,359 INFO [Listener at localhost/33155] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 20:10:59,360 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,361 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@120830ef{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:59,362 INFO [Listener at localhost/33155] server.AbstractConnector(333): Started ServerConnector@608fcbd6{HTTP/1.1, (http/1.1)}{0.0.0.0:37461} 2023-07-24 20:10:59,362 INFO [Listener at localhost/33155] server.Server(415): Started @35317ms 2023-07-24 20:10:59,378 INFO [Listener at localhost/33155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:10:59,379 INFO [Listener at localhost/33155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:10:59,381 INFO [Listener at localhost/33155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41851 2023-07-24 20:10:59,382 INFO [Listener at localhost/33155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:10:59,384 DEBUG [Listener at localhost/33155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:10:59,384 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,385 INFO [Listener at localhost/33155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,386 INFO [Listener at localhost/33155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41851 connecting to ZooKeeper ensemble=127.0.0.1:56754 2023-07-24 20:10:59,389 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:418510x0, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:10:59,391 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:418510x0, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:10:59,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41851-0x10198f54bce0003 connected 2023-07-24 20:10:59,392 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:10:59,393 DEBUG [Listener at localhost/33155] zookeeper.ZKUtil(164): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:10:59,393 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41851 2023-07-24 20:10:59,394 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41851 2023-07-24 20:10:59,395 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41851 2023-07-24 20:10:59,395 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41851 2023-07-24 20:10:59,396 DEBUG [Listener at localhost/33155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41851 2023-07-24 20:10:59,397 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:10:59,397 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:10:59,397 INFO [Listener at localhost/33155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:10:59,398 INFO [Listener at localhost/33155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:10:59,398 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:10:59,398 INFO [Listener at localhost/33155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:10:59,398 INFO [Listener at localhost/33155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 20:10:59,399 INFO [Listener at localhost/33155] http.HttpServer(1146): Jetty bound to port 42039 2023-07-24 20:10:59,399 INFO [Listener at localhost/33155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:59,400 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,400 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16f57de1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:10:59,400 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,401 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1173ffd7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:10:59,405 INFO [Listener at localhost/33155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:10:59,406 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:10:59,407 INFO [Listener at localhost/33155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:10:59,407 INFO [Listener at localhost/33155] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 20:10:59,408 INFO [Listener at localhost/33155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:10:59,409 INFO [Listener at localhost/33155] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5aa4b84a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:10:59,410 INFO [Listener at localhost/33155] server.AbstractConnector(333): Started ServerConnector@1d36fc51{HTTP/1.1, (http/1.1)}{0.0.0.0:42039} 2023-07-24 20:10:59,411 INFO [Listener at localhost/33155] server.Server(415): Started @35365ms 2023-07-24 20:10:59,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:10:59,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6687562e{HTTP/1.1, (http/1.1)}{0.0.0.0:44091} 2023-07-24 20:10:59,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @35373ms 2023-07-24 20:10:59,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,421 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 20:10:59,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,424 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:10:59,424 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:10:59,424 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:10:59,424 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,425 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:10:59,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:10:59,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45907,1690229459172 from backup master directory 2023-07-24 
20:10:59,432 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,433 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 20:10:59,433 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:10:59,433 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:10:59,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/hbase.id with ID: ca3a721f-445e-4a15-8150-31cd4efd04d3 2023-07-24 20:10:59,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:10:59,478 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5e83b336 to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:59,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ed1797f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:59,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:59,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 20:10:59,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:59,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store-tmp 2023-07-24 20:10:59,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:59,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:10:59,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:59,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:59,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:10:59,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:10:59,509 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 20:10:59,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:10:59,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/WALs/jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45907%2C1690229459172, suffix=, logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/WALs/jenkins-hbase4.apache.org,45907,1690229459172, archiveDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/oldWALs, maxLogs=10 2023-07-24 20:10:59,529 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK] 2023-07-24 20:10:59,530 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK] 2023-07-24 20:10:59,529 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK] 2023-07-24 20:10:59,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/WALs/jenkins-hbase4.apache.org,45907,1690229459172/jenkins-hbase4.apache.org%2C45907%2C1690229459172.1690229459512 2023-07-24 20:10:59,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK], DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK], DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK]] 2023-07-24 20:10:59,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:10:59,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:59,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,536 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,538 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 20:10:59,538 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 20:10:59,539 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:59,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:10:59,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:59,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9617165280, jitterRate=-0.10433168709278107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:10:59,546 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:10:59,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 20:10:59,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 20:10:59,547 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 20:10:59,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 20:10:59,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 20:10:59,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 20:10:59,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 20:10:59,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 20:10:59,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 20:10:59,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 20:10:59,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 20:10:59,552 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 20:10:59,554 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 20:10:59,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 20:10:59,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 20:10:59,557 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:59,557 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:59,557 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 20:10:59,557 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:10:59,557 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45907,1690229459172, sessionid=0x10198f54bce0000, setting cluster-up flag (Was=false) 2023-07-24 20:10:59,563 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 20:10:59,568 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,577 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:10:59,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 20:10:59,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:10:59,584 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.hbase-snapshot/.tmp 2023-07-24 20:10:59,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 20:10:59,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 20:10:59,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 20:10:59,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:10:59,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 20:10:59,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 20:10:59,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:59,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:10:59,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 20:10:59,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:10:59,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:59,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690229489608 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 20:10:59,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 20:10:59,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,609 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:59,609 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 20:10:59,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 20:10:59,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 20:10:59,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 20:10:59,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 20:10:59,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 20:10:59,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229459611,5,FailOnTimeoutGroup] 2023-07-24 20:10:59,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229459611,5,FailOnTimeoutGroup] 2023-07-24 20:10:59,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,611 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:10:59,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 20:10:59,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,619 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(951): ClusterId : ca3a721f-445e-4a15-8150-31cd4efd04d3 2023-07-24 20:10:59,619 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(951): ClusterId : ca3a721f-445e-4a15-8150-31cd4efd04d3 2023-07-24 20:10:59,620 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:59,623 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:59,620 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(951): ClusterId : ca3a721f-445e-4a15-8150-31cd4efd04d3 2023-07-24 20:10:59,623 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:10:59,626 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:59,626 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:59,626 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:59,626 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:59,626 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:10:59,626 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:10:59,629 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:59,630 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:59,630 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:10:59,630 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ReadOnlyZKClient(139): Connect 0x3a43bbc2 to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:59,631 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ReadOnlyZKClient(139): Connect 0x0d836ca2 to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:59,631 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ReadOnlyZKClient(139): Connect 0x47c987d1 to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:10:59,640 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:59,640 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:10:59,640 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => 
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423 2023-07-24 20:10:59,646 DEBUG [RS:2;jenkins-hbase4:41851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3449362, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:59,646 DEBUG [RS:1;jenkins-hbase4:35907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@348452fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:59,647 DEBUG [RS:0;jenkins-hbase4:44593] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d31eca1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:10:59,647 DEBUG [RS:1;jenkins-hbase4:35907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75b958c1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:59,647 DEBUG [RS:2;jenkins-hbase4:41851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42799959, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:59,647 DEBUG [RS:0;jenkins-hbase4:44593] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8336b63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:10:59,658 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:10:59,659 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35907 2023-07-24 20:10:59,659 DEBUG [RS:2;jenkins-hbase4:41851] 
regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41851 2023-07-24 20:10:59,659 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44593 2023-07-24 20:10:59,659 INFO [RS:2;jenkins-hbase4:41851] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:59,659 INFO [RS:0;jenkins-hbase4:44593] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:59,659 INFO [RS:0;jenkins-hbase4:44593] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:59,659 INFO [RS:1;jenkins-hbase4:35907] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:10:59,659 INFO [RS:1;jenkins-hbase4:35907] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:59,659 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:59,659 INFO [RS:2;jenkins-hbase4:41851] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:10:59,659 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:59,659 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:10:59,660 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45907,1690229459172 with isa=jenkins-hbase4.apache.org/172.31.14.131:44593, startcode=1690229459259 2023-07-24 20:10:59,660 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45907,1690229459172 with isa=jenkins-hbase4.apache.org/172.31.14.131:41851, startcode=1690229459378 2023-07-24 20:10:59,660 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45907,1690229459172 with isa=jenkins-hbase4.apache.org/172.31.14.131:35907, startcode=1690229459316 2023-07-24 20:10:59,660 DEBUG [RS:2;jenkins-hbase4:41851] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:59,660 DEBUG [RS:0;jenkins-hbase4:44593] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:59,660 DEBUG [RS:1;jenkins-hbase4:35907] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:10:59,662 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35059, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:59,662 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58179, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:59,662 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45531, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:10:59,663 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 
20:10:59,664 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:10:59,664 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 20:10:59,664 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:10:59,664 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,664 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423 2023-07-24 20:10:59,664 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:10:59,665 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40043 2023-07-24 20:10:59,665 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 20:10:59,665 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35575 2023-07-24 20:10:59,665 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,665 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423 2023-07-24 20:10:59,665 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40043 2023-07-24 20:10:59,665 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35575 2023-07-24 20:10:59,665 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:10:59,665 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 20:10:59,665 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423 2023-07-24 20:10:59,665 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40043 2023-07-24 20:10:59,665 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35575 2023-07-24 20:10:59,666 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/info 2023-07-24 20:10:59,666 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:10:59,676 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ZKUtil(162): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,676 WARN [RS:0;jenkins-hbase4:44593] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:10:59,676 INFO [RS:0;jenkins-hbase4:44593] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:59,676 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,678 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ZKUtil(162): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,678 WARN [RS:2;jenkins-hbase4:41851] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:10:59,678 INFO [RS:2;jenkins-hbase4:41851] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:59,678 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,683 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ZKUtil(162): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,683 WARN [RS:1;jenkins-hbase4:35907] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 20:10:59,683 INFO [RS:1;jenkins-hbase4:35907] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:10:59,683 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,683 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35907,1690229459316] 2023-07-24 20:10:59,683 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44593,1690229459259] 2023-07-24 20:10:59,683 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41851,1690229459378] 2023-07-24 20:10:59,688 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:10:59,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:59,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:10:59,701 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:10:59,701 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ZKUtil(162): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,701 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:10:59,702 DEBUG [RS:2;jenkins-hbase4:41851] 
zookeeper.ZKUtil(162): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,702 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:59,702 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ZKUtil(162): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,702 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ZKUtil(162): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,702 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:10:59,703 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ZKUtil(162): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,703 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ZKUtil(162): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,703 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ZKUtil(162): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,703 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ZKUtil(162): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,704 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ZKUtil(162): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,704 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:59,704 DEBUG [RS:0;jenkins-hbase4:44593] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:59,704 INFO [RS:2;jenkins-hbase4:41851] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:59,704 INFO [RS:0;jenkins-hbase4:44593] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:10:59,705 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:10:59,706 INFO [RS:2;jenkins-hbase4:41851] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:59,707 INFO [RS:1;jenkins-hbase4:35907] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 
20:10:59,711 INFO [RS:2;jenkins-hbase4:41851] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:59,711 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,711 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/table 2023-07-24 20:10:59,711 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:10:59,712 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:10:59,713 INFO [RS:1;jenkins-hbase4:35907] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:59,713 INFO [RS:0;jenkins-hbase4:44593] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:10:59,713 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:59,714 INFO [RS:1;jenkins-hbase4:35907] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:59,714 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,714 INFO [RS:0;jenkins-hbase4:44593] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:10:59,714 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740 2023-07-24 20:10:59,714 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,715 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740 2023-07-24 20:10:59,717 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 20:10:59,719 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:59,719 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:10:59,720 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,721 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,722 DEBUG [RS:2;jenkins-hbase4:41851] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,720 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:10:59,721 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,728 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:59,729 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,728 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,729 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,729 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:1;jenkins-hbase4:35907] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,729 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:10:59,730 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,730 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,730 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,730 DEBUG [RS:0;jenkins-hbase4:44593] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:10:59,730 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:10:59,731 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11100985760, jitterRate=0.03385986387729645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:10:59,731 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:10:59,731 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:10:59,731 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:10:59,731 
DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:10:59,731 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:10:59,732 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:10:59,732 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,732 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,732 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,732 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,789 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,789 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,789 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,789 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,797 INFO [RS:2;jenkins-hbase4:41851] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:59,797 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41851,1690229459378-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,804 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:10:59,804 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:10:59,805 INFO [RS:0;jenkins-hbase4:44593] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:59,805 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44593,1690229459259-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,806 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:10:59,806 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 20:10:59,806 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 20:10:59,812 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 20:10:59,815 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 20:10:59,818 INFO [RS:1;jenkins-hbase4:35907] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:10:59,819 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35907,1690229459316-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,828 INFO [RS:2;jenkins-hbase4:41851] regionserver.Replication(203): jenkins-hbase4.apache.org,41851,1690229459378 started 2023-07-24 20:10:59,829 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41851,1690229459378, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41851, sessionid=0x10198f54bce0003 2023-07-24 20:10:59,829 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:59,829 DEBUG [RS:2;jenkins-hbase4:41851] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,829 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41851,1690229459378' 2023-07-24 20:10:59,829 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:59,829 INFO [RS:0;jenkins-hbase4:44593] regionserver.Replication(203): jenkins-hbase4.apache.org,44593,1690229459259 started 2023-07-24 20:10:59,830 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44593,1690229459259, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44593, sessionid=0x10198f54bce0001 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:59,830 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:59,830 DEBUG [RS:0;jenkins-hbase4:44593] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,830 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44593,1690229459259' 
2023-07-24 20:10:59,830 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41851,1690229459378' 2023-07-24 20:10:59,830 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:59,830 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:59,831 DEBUG [RS:2;jenkins-hbase4:41851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:59,831 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:59,831 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:59,831 DEBUG [RS:0;jenkins-hbase4:44593] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:10:59,831 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44593,1690229459259' 2023-07-24 20:10:59,831 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:10:59,833 DEBUG [RS:2;jenkins-hbase4:41851] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:59,833 INFO [RS:2;jenkins-hbase4:41851] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 20:10:59,833 DEBUG [RS:0;jenkins-hbase4:44593] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:59,834 DEBUG [RS:0;jenkins-hbase4:44593] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:59,834 INFO [RS:0;jenkins-hbase4:44593] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 20:10:59,836 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,836 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 20:10:59,836 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ZKUtil(398): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 20:10:59,836 INFO [RS:2;jenkins-hbase4:41851] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 20:10:59,836 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ZKUtil(398): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 20:10:59,836 INFO [RS:0;jenkins-hbase4:44593] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 20:10:59,837 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,837 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,837 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,838 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,842 INFO [RS:1;jenkins-hbase4:35907] regionserver.Replication(203): jenkins-hbase4.apache.org,35907,1690229459316 started 2023-07-24 20:10:59,842 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35907,1690229459316, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35907, sessionid=0x10198f54bce0002 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35907,1690229459316' 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:10:59,842 DEBUG [RS:1;jenkins-hbase4:35907] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:10:59,843 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35907,1690229459316' 2023-07-24 20:10:59,843 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 
20:10:59,843 DEBUG [RS:1;jenkins-hbase4:35907] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:10:59,843 DEBUG [RS:1;jenkins-hbase4:35907] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:10:59,843 INFO [RS:1;jenkins-hbase4:35907] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 20:10:59,843 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,844 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ZKUtil(398): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 20:10:59,844 INFO [RS:1;jenkins-hbase4:35907] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 20:10:59,844 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,844 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:10:59,941 INFO [RS:0;jenkins-hbase4:44593] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44593%2C1690229459259, suffix=, logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,44593,1690229459259, archiveDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs, maxLogs=32 2023-07-24 20:10:59,941 INFO [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41851%2C1690229459378, suffix=, logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,41851,1690229459378, archiveDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs, maxLogs=32 2023-07-24 20:10:59,946 INFO [RS:1;jenkins-hbase4:35907] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35907%2C1690229459316, suffix=, logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,35907,1690229459316, archiveDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs, maxLogs=32 2023-07-24 20:10:59,968 DEBUG [jenkins-hbase4:45907] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 20:10:59,968 DEBUG [jenkins-hbase4:45907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:10:59,969 DEBUG [jenkins-hbase4:45907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:10:59,969 DEBUG [jenkins-hbase4:45907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:10:59,969 DEBUG [jenkins-hbase4:45907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:10:59,969 DEBUG [jenkins-hbase4:45907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 
20:10:59,999 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK] 2023-07-24 20:11:00,000 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK] 2023-07-24 20:11:00,000 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK] 2023-07-24 20:11:00,000 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41851,1690229459378, state=OPENING 2023-07-24 20:11:00,002 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 20:11:00,003 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:00,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41851,1690229459378}] 2023-07-24 20:11:00,004 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:11:00,072 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK] 2023-07-24 20:11:00,072 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK] 2023-07-24 20:11:00,072 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK] 2023-07-24 20:11:00,075 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK] 2023-07-24 20:11:00,075 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK] 2023-07-24 20:11:00,075 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK] 2023-07-24 20:11:00,080 INFO [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,41851,1690229459378/jenkins-hbase4.apache.org%2C41851%2C1690229459378.1690229459945 2023-07-24 20:11:00,103 INFO [RS:1;jenkins-hbase4:35907] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,35907,1690229459316/jenkins-hbase4.apache.org%2C35907%2C1690229459316.1690229459946 2023-07-24 20:11:00,107 INFO [RS:0;jenkins-hbase4:44593] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,44593,1690229459259/jenkins-hbase4.apache.org%2C44593%2C1690229459259.1690229459945 2023-07-24 20:11:00,107 DEBUG [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK], DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK], DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK]] 2023-07-24 20:11:00,108 DEBUG [RS:1;jenkins-hbase4:35907] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK], DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK], DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK]] 2023-07-24 20:11:00,108 DEBUG [RS:0;jenkins-hbase4:44593] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK], DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK], DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK]] 2023-07-24 20:11:00,133 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 20:11:00,133 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-24 20:11:00,133 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:11:00,203 WARN [ReadOnlyZKClient-127.0.0.1:56754@0x5e83b336] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 20:11:00,204 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:00,205 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39976, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:00,206 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41851] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:39976 deadline: 1690229520206, 
exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:00,240 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:00,243 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:11:00,244 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39982, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:11:00,248 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 20:11:00,248 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:00,251 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41851%2C1690229459378.meta, suffix=.meta, logDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,41851,1690229459378, archiveDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs, maxLogs=32 2023-07-24 20:11:00,272 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK] 2023-07-24 20:11:00,272 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK] 2023-07-24 20:11:00,272 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK] 2023-07-24 20:11:00,277 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/WALs/jenkins-hbase4.apache.org,41851,1690229459378/jenkins-hbase4.apache.org%2C41851%2C1690229459378.meta.1690229460252.meta 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39697,DS-a473fda0-70eb-4ad4-8023-e583bb4c6b17,DISK], DatanodeInfoWithStorage[127.0.0.1:45801,DS-f14ec800-3e52-411a-9a3d-5e242c57be58,DISK], DatanodeInfoWithStorage[127.0.0.1:42709,DS-7390ff1b-060a-43d8-af8b-85103d48df07,DISK]] 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:11:00,278 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 20:11:00,278 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 20:11:00,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 20:11:00,280 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:11:00,281 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/info 2023-07-24 20:11:00,281 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/info 2023-07-24 20:11:00,281 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:11:00,282 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:00,282 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:11:00,283 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:11:00,283 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:11:00,284 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:11:00,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:00,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:11:00,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/table 2023-07-24 20:11:00,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/table 2023-07-24 20:11:00,286 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:11:00,287 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:00,289 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740 2023-07-24 20:11:00,294 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740 2023-07-24 20:11:00,297 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No 
hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 20:11:00,299 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:11:00,300 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9451103200, jitterRate=-0.11979742348194122}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:11:00,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:11:00,303 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690229460240 2023-07-24 20:11:00,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 20:11:00,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 20:11:00,309 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41851,1690229459378, state=OPEN 2023-07-24 20:11:00,310 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 20:11:00,311 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:11:00,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 20:11:00,313 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41851,1690229459378 in 307 msec 2023-07-24 20:11:00,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 20:11:00,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 506 msec 2023-07-24 20:11:00,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 727 msec 2023-07-24 20:11:00,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690229460316, completionTime=-1 2023-07-24 20:11:00,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 20:11:00,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 20:11:00,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 20:11:00,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690229520324 2023-07-24 20:11:00,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690229580324 2023-07-24 20:11:00,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45907,1690229459172-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45907,1690229459172-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45907,1690229459172-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45907, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:00,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 20:11:00,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:00,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 20:11:00,339 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 20:11:00,340 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:00,341 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:00,342 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,343 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99 empty. 2023-07-24 20:11:00,344 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,344 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 20:11:00,361 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:00,362 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6cbd7fa4cc38a3370728b38f07e85f99, NAME => 'hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp 2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6cbd7fa4cc38a3370728b38f07e85f99, disabling compactions & flushes 2023-07-24 20:11:00,372 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 
2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. after waiting 0 ms 2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:00,372 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:00,372 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6cbd7fa4cc38a3370728b38f07e85f99: 2023-07-24 20:11:00,375 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:00,376 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229460376"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229460376"}]},"ts":"1690229460376"} 2023-07-24 20:11:00,378 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:11:00,379 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:00,379 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229460379"}]},"ts":"1690229460379"} 2023-07-24 20:11:00,380 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 20:11:00,384 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:00,384 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:00,384 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:00,384 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:00,384 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:00,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6cbd7fa4cc38a3370728b38f07e85f99, ASSIGN}] 2023-07-24 20:11:00,387 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6cbd7fa4cc38a3370728b38f07e85f99, ASSIGN 2023-07-24 20:11:00,388 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6cbd7fa4cc38a3370728b38f07e85f99, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35907,1690229459316; forceNewPlan=false, retain=false 2023-07-24 20:11:00,538 INFO [jenkins-hbase4:45907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:11:00,539 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6cbd7fa4cc38a3370728b38f07e85f99, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:00,539 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229460539"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229460539"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229460539"}]},"ts":"1690229460539"} 2023-07-24 20:11:00,541 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6cbd7fa4cc38a3370728b38f07e85f99, server=jenkins-hbase4.apache.org,35907,1690229459316}] 2023-07-24 20:11:00,694 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:00,694 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:11:00,696 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:11:00,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 
2023-07-24 20:11:00,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6cbd7fa4cc38a3370728b38f07e85f99, NAME => 'hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:00,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:00,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,701 INFO [StoreOpener-6cbd7fa4cc38a3370728b38f07e85f99-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,703 DEBUG [StoreOpener-6cbd7fa4cc38a3370728b38f07e85f99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/info 2023-07-24 20:11:00,703 DEBUG [StoreOpener-6cbd7fa4cc38a3370728b38f07e85f99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/info 2023-07-24 20:11:00,703 INFO [StoreOpener-6cbd7fa4cc38a3370728b38f07e85f99-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6cbd7fa4cc38a3370728b38f07e85f99 columnFamilyName info 2023-07-24 20:11:00,704 INFO [StoreOpener-6cbd7fa4cc38a3370728b38f07e85f99-1] regionserver.HStore(310): Store=6cbd7fa4cc38a3370728b38f07e85f99/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:00,704 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,705 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:00,708 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:00,710 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 20:11:00,712 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:00,713 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:00,713 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:00,713 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6cbd7fa4cc38a3370728b38f07e85f99; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11375919840, jitterRate=0.05946509540081024}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:00,714 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6cbd7fa4cc38a3370728b38f07e85f99: 2023-07-24 20:11:00,714 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99., pid=6, masterSystemTime=1690229460694 2023-07-24 20:11:00,714 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:00,717 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7 empty. 
2023-07-24 20:11:00,717 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:00,717 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 20:11:00,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:00,718 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:00,718 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6cbd7fa4cc38a3370728b38f07e85f99, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:00,719 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229460718"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229460718"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229460718"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229460718"}]},"ts":"1690229460718"} 2023-07-24 20:11:00,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-24 20:11:00,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6cbd7fa4cc38a3370728b38f07e85f99, server=jenkins-hbase4.apache.org,35907,1690229459316 in 179 msec 2023-07-24 20:11:00,733 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 20:11:00,733 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6cbd7fa4cc38a3370728b38f07e85f99, ASSIGN in 347 msec 2023-07-24 20:11:00,735 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:00,735 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229460735"}]},"ts":"1690229460735"} 2023-07-24 20:11:00,736 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 20:11:00,739 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:00,741 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 408 msec 2023-07-24 20:11:00,751 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:00,752 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] 
regionserver.HRegion(7675): creating {ENCODED => f16f04ba851897df1fa907c1396f37a7, NAME => 'hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f16f04ba851897df1fa907c1396f37a7, disabling compactions & flushes 2023-07-24 20:11:00,764 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. after waiting 0 ms 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:00,764 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:00,764 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f16f04ba851897df1fa907c1396f37a7: 2023-07-24 20:11:00,767 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:00,768 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229460767"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229460767"}]},"ts":"1690229460767"} 2023-07-24 20:11:00,769 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:11:00,770 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:00,770 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229460770"}]},"ts":"1690229460770"} 2023-07-24 20:11:00,771 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 20:11:00,775 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:00,775 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:00,775 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:00,775 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:00,775 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:00,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f16f04ba851897df1fa907c1396f37a7, ASSIGN}] 2023-07-24 20:11:00,777 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f16f04ba851897df1fa907c1396f37a7, ASSIGN 2023-07-24 20:11:00,778 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f16f04ba851897df1fa907c1396f37a7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35907,1690229459316; forceNewPlan=false, retain=false 2023-07-24 20:11:00,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 20:11:00,835 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:00,835 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:00,838 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:00,840 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:00,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 20:11:00,852 DEBUG [Listener at localhost/33155-EventThread] 
zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:00,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-24 20:11:00,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 20:11:00,872 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-24 20:11:00,872 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 20:11:00,928 INFO [jenkins-hbase4:45907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:11:00,929 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=f16f04ba851897df1fa907c1396f37a7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:00,930 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229460929"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229460929"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229460929"}]},"ts":"1690229460929"} 2023-07-24 20:11:00,933 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=8, state=RUNNABLE; OpenRegionProcedure f16f04ba851897df1fa907c1396f37a7, server=jenkins-hbase4.apache.org,35907,1690229459316}] 2023-07-24 20:11:01,089 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:01,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f16f04ba851897df1fa907c1396f37a7, NAME => 'hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. service=MultiRowMutationService 2023-07-24 20:11:01,090 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,091 INFO [StoreOpener-f16f04ba851897df1fa907c1396f37a7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,093 DEBUG [StoreOpener-f16f04ba851897df1fa907c1396f37a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/m 2023-07-24 20:11:01,093 DEBUG [StoreOpener-f16f04ba851897df1fa907c1396f37a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/m 2023-07-24 20:11:01,093 INFO [StoreOpener-f16f04ba851897df1fa907c1396f37a7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f16f04ba851897df1fa907c1396f37a7 columnFamilyName m 2023-07-24 20:11:01,094 INFO [StoreOpener-f16f04ba851897df1fa907c1396f37a7-1] regionserver.HStore(310): Store=f16f04ba851897df1fa907c1396f37a7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:01,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:01,099 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:01,100 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f16f04ba851897df1fa907c1396f37a7; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3ec278b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:01,100 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f16f04ba851897df1fa907c1396f37a7: 2023-07-24 20:11:01,100 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7., pid=11, masterSystemTime=1690229461085 2023-07-24 20:11:01,102 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:01,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:01,102 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=f16f04ba851897df1fa907c1396f37a7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:01,102 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229461102"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229461102"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229461102"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229461102"}]},"ts":"1690229461102"} 2023-07-24 20:11:01,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=8 2023-07-24 20:11:01,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=8, state=SUCCESS; OpenRegionProcedure f16f04ba851897df1fa907c1396f37a7, server=jenkins-hbase4.apache.org,35907,1690229459316 in 173 msec 2023-07-24 20:11:01,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-24 20:11:01,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f16f04ba851897df1fa907c1396f37a7, ASSIGN in 330 msec 2023-07-24 20:11:01,117 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:01,121 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 252 msec 2023-07-24 20:11:01,122 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=7, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:01,122 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229461122"}]},"ts":"1690229461122"} 2023-07-24 20:11:01,123 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 20:11:01,131 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:01,134 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 424 msec 2023-07-24 20:11:01,135 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 20:11:01,139 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 20:11:01,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.706sec 2023-07-24 20:11:01,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-24 20:11:01,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:01,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 20:11:01,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 20:11:01,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-24 20:11:01,149 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:01,150 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:01,152 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,153 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9 empty. 2023-07-24 20:11:01,153 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,153 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 20:11:01,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 20:11:01,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 20:11:01,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:01,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:01,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 20:11:01,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 20:11:01,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45907,1690229459172-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 20:11:01,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45907,1690229459172-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 20:11:01,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 20:11:01,175 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:01,176 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6719584c4c5de5c40cce8453cd4869a9, NAME => 'hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 6719584c4c5de5c40cce8453cd4869a9, disabling compactions & flushes 2023-07-24 20:11:01,191 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. after waiting 0 ms 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,191 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,191 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 6719584c4c5de5c40cce8453cd4869a9: 2023-07-24 20:11:01,194 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:01,195 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690229461195"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229461195"}]},"ts":"1690229461195"} 2023-07-24 20:11:01,196 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:11:01,198 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:01,198 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229461198"}]},"ts":"1690229461198"} 2023-07-24 20:11:01,200 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 20:11:01,205 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:01,205 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:01,205 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:01,205 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:01,205 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:01,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6719584c4c5de5c40cce8453cd4869a9, ASSIGN}] 2023-07-24 20:11:01,209 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6719584c4c5de5c40cce8453cd4869a9, ASSIGN 2023-07-24 20:11:01,210 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6719584c4c5de5c40cce8453cd4869a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35907,1690229459316; forceNewPlan=false, retain=false 2023-07-24 20:11:01,213 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 20:11:01,213 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 20:11:01,218 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:01,219 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:01,220 DEBUG [Listener at localhost/33155] zookeeper.ReadOnlyZKClient(139): Connect 0x2c408b4b to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:01,220 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:11:01,225 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45907,1690229459172] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 20:11:01,228 DEBUG [Listener at localhost/33155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d8d5276, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:01,229 DEBUG [hconnection-0x4a5e01e4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:01,231 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48224, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:01,232 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:11:01,233 INFO [Listener at localhost/33155] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:01,236 DEBUG [Listener at localhost/33155] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 20:11:01,237 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 20:11:01,241 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 20:11:01,241 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:01,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 20:11:01,243 DEBUG [Listener at localhost/33155] zookeeper.ReadOnlyZKClient(139): Connect 0x4abcb607 to 127.0.0.1:56754 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:01,247 DEBUG [Listener at localhost/33155] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7057df95, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:01,247 INFO [Listener at localhost/33155] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56754 2023-07-24 20:11:01,250 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:01,251 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10198f54bce000a connected 2023-07-24 20:11:01,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-24 20:11:01,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-24 20:11:01,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 20:11:01,267 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:01,270 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-24 20:11:01,360 INFO [jenkins-hbase4:45907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 20:11:01,361 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=6719584c4c5de5c40cce8453cd4869a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:01,361 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690229461361"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229461361"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229461361"}]},"ts":"1690229461361"} 2023-07-24 20:11:01,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 20:11:01,363 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 6719584c4c5de5c40cce8453cd4869a9, server=jenkins-hbase4.apache.org,35907,1690229459316}] 2023-07-24 20:11:01,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:01,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-24 20:11:01,370 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:01,370 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 20:11:01,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-24 20:11:01,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:01,375 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:01,376 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:11:01,380 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:01,381 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,382 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 empty. 
2023-07-24 20:11:01,382 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,382 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 20:11:01,407 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:01,410 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => c0be415e7e6c5c32db05610e0299b2f7, NAME => 'np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp 2023-07-24 20:11:01,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:01,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing c0be415e7e6c5c32db05610e0299b2f7, disabling compactions & flushes 2023-07-24 20:11:01,433 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,433 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,433 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. after waiting 0 ms 2023-07-24 20:11:01,433 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,433 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,433 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for c0be415e7e6c5c32db05610e0299b2f7: 2023-07-24 20:11:01,440 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:01,442 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229461441"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229461441"}]},"ts":"1690229461441"} 2023-07-24 20:11:01,444 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 20:11:01,445 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:01,445 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229461445"}]},"ts":"1690229461445"} 2023-07-24 20:11:01,446 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-24 20:11:01,450 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:01,451 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:01,451 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:01,451 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:01,451 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:01,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, ASSIGN}] 2023-07-24 20:11:01,452 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, ASSIGN 2023-07-24 20:11:01,452 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41851,1690229459378; forceNewPlan=false, retain=false 2023-07-24 20:11:01,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:01,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 
2023-07-24 20:11:01,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6719584c4c5de5c40cce8453cd4869a9, NAME => 'hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:01,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:01,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,543 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,547 DEBUG [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/q 2023-07-24 20:11:01,547 DEBUG [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/q 2023-07-24 20:11:01,548 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6719584c4c5de5c40cce8453cd4869a9 columnFamilyName q 2023-07-24 20:11:01,550 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] regionserver.HStore(310): Store=6719584c4c5de5c40cce8453cd4869a9/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:01,550 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,552 DEBUG 
[StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/u 2023-07-24 20:11:01,552 DEBUG [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/u 2023-07-24 20:11:01,552 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6719584c4c5de5c40cce8453cd4869a9 columnFamilyName u 2023-07-24 20:11:01,553 INFO [StoreOpener-6719584c4c5de5c40cce8453cd4869a9-1] regionserver.HStore(310): Store=6719584c4c5de5c40cce8453cd4869a9/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:01,554 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,554 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-24 20:11:01,558 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:01,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:01,561 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6719584c4c5de5c40cce8453cd4869a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9704215040, jitterRate=-0.09622454643249512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 20:11:01,561 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6719584c4c5de5c40cce8453cd4869a9: 2023-07-24 20:11:01,562 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9., pid=15, masterSystemTime=1690229461526 2023-07-24 20:11:01,563 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,563 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:01,564 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=6719584c4c5de5c40cce8453cd4869a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:01,564 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690229461564"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229461564"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229461564"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229461564"}]},"ts":"1690229461564"} 2023-07-24 20:11:01,568 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-24 20:11:01,568 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 6719584c4c5de5c40cce8453cd4869a9, server=jenkins-hbase4.apache.org,35907,1690229459316 in 202 msec 2023-07-24 20:11:01,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 20:11:01,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6719584c4c5de5c40cce8453cd4869a9, ASSIGN in 363 msec 2023-07-24 20:11:01,570 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:01,570 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229461570"}]},"ts":"1690229461570"} 2023-07-24 20:11:01,571 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 20:11:01,574 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:01,575 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 434 msec 2023-07-24 20:11:01,603 INFO [jenkins-hbase4:45907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 20:11:01,604 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c0be415e7e6c5c32db05610e0299b2f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:01,604 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229461604"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229461604"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229461604"}]},"ts":"1690229461604"} 2023-07-24 20:11:01,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure c0be415e7e6c5c32db05610e0299b2f7, server=jenkins-hbase4.apache.org,41851,1690229459378}] 2023-07-24 20:11:01,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:01,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 
2023-07-24 20:11:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c0be415e7e6c5c32db05610e0299b2f7, NAME => 'np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,763 INFO [StoreOpener-c0be415e7e6c5c32db05610e0299b2f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,765 DEBUG [StoreOpener-c0be415e7e6c5c32db05610e0299b2f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/fam1 2023-07-24 20:11:01,765 DEBUG [StoreOpener-c0be415e7e6c5c32db05610e0299b2f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/fam1 2023-07-24 20:11:01,765 INFO [StoreOpener-c0be415e7e6c5c32db05610e0299b2f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c0be415e7e6c5c32db05610e0299b2f7 columnFamilyName fam1 2023-07-24 20:11:01,766 INFO [StoreOpener-c0be415e7e6c5c32db05610e0299b2f7-1] regionserver.HStore(310): Store=c0be415e7e6c5c32db05610e0299b2f7/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:01,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:01,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:01,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c0be415e7e6c5c32db05610e0299b2f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11340703680, jitterRate=0.05618533492088318}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:01,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c0be415e7e6c5c32db05610e0299b2f7: 2023-07-24 20:11:01,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7., pid=18, masterSystemTime=1690229461758 2023-07-24 20:11:01,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,780 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:01,780 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c0be415e7e6c5c32db05610e0299b2f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:01,780 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229461780"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229461780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229461780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229461780"}]},"ts":"1690229461780"} 2023-07-24 20:11:01,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 20:11:01,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure c0be415e7e6c5c32db05610e0299b2f7, server=jenkins-hbase4.apache.org,41851,1690229459378 in 176 msec 2023-07-24 20:11:01,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 20:11:01,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, ASSIGN in 332 msec 2023-07-24 20:11:01,785 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:01,785 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229461785"}]},"ts":"1690229461785"} 2023-07-24 20:11:01,786 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-24 20:11:01,788 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:01,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 420 msec 2023-07-24 20:11:01,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:01,975 INFO [Listener at localhost/33155] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-24 20:11:01,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:01,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-24 20:11:01,979 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:01,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-24 20:11:01,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 20:11:01,998 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-24 20:11:02,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 20:11:02,083 INFO [Listener at localhost/33155] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
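The rolled-back pid=19 above is the namespace region quota rejecting np1:table2 (one existing region plus five requested regions against a limit of five), and the entries that follow show the cleanup: np1:table1 is disabled and deleted, then the np1 namespace itself is removed. A hedged sketch of a client-side sequence that would produce this behaviour, assuming the HBase 2.x Admin API and that quotas are enabled on the master; the split keys, connection setup, and family names are illustrative, only the quota value of 5 is taken from the limit reported in the log:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceQuotaSketch {
  public static void main(String[] args) throws Exception {
    // Assumes hbase.quota.enabled=true on the cluster, otherwise the
    // namespace region quota below is not enforced.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace limited to 5 regions in total, matching the limit in the log.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());

      // First table: a single region, bringing the namespace to 1 of 5.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build());

      // Second table pre-split into 5 regions: 1 + 5 = 6 > 5, so the master
      // rolls its CreateTableProcedure back with a QuotaExceededException.
      byte[][] splits = {
          Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d"), Bytes.toBytes("e")
      };
      try {
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build(), splits);
      } catch (IOException expected) {
        // "The table np1:table2 is not allowed to have 6 regions ..."
      }

      // Cleanup mirrors the DISABLE / DELETE table and DeleteNamespaceProcedure
      // entries that follow in the log.
      admin.disableTable(TableName.valueOf("np1", "table1"));
      admin.deleteTable(TableName.valueOf("np1", "table1"));
      admin.deleteNamespace("np1");
    }
  }
}
```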
2023-07-24 20:11:02,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:02,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:02,085 INFO [Listener at localhost/33155] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-24 20:11:02,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-24 20:11:02,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-24 20:11:02,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 20:11:02,089 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229462089"}]},"ts":"1690229462089"} 2023-07-24 20:11:02,090 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-24 20:11:02,092 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-24 20:11:02,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, UNASSIGN}] 2023-07-24 20:11:02,094 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, UNASSIGN 2023-07-24 20:11:02,095 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c0be415e7e6c5c32db05610e0299b2f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:02,095 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229462095"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229462095"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229462095"}]},"ts":"1690229462095"} 2023-07-24 20:11:02,096 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure c0be415e7e6c5c32db05610e0299b2f7, server=jenkins-hbase4.apache.org,41851,1690229459378}] 2023-07-24 20:11:02,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 20:11:02,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:02,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c0be415e7e6c5c32db05610e0299b2f7, disabling compactions & flushes 2023-07-24 20:11:02,249 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:02,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:02,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. after waiting 0 ms 2023-07-24 20:11:02,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:11:02,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7. 2023-07-24 20:11:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c0be415e7e6c5c32db05610e0299b2f7: 2023-07-24 20:11:02,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:02,255 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c0be415e7e6c5c32db05610e0299b2f7, regionState=CLOSED 2023-07-24 20:11:02,255 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229462255"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229462255"}]},"ts":"1690229462255"} 2023-07-24 20:11:02,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 20:11:02,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure c0be415e7e6c5c32db05610e0299b2f7, server=jenkins-hbase4.apache.org,41851,1690229459378 in 160 msec 2023-07-24 20:11:02,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 20:11:02,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c0be415e7e6c5c32db05610e0299b2f7, UNASSIGN in 166 msec 2023-07-24 20:11:02,260 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229462260"}]},"ts":"1690229462260"} 2023-07-24 20:11:02,261 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-24 20:11:02,263 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-24 20:11:02,264 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 177 msec 2023-07-24 20:11:02,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 20:11:02,391 INFO [Listener at localhost/33155] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-24 20:11:02,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-24 20:11:02,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,394 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-24 20:11:02,395 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:02,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:11:02,398 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:02,400 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/fam1, FileablePath, hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/recovered.edits] 2023-07-24 20:11:02,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 20:11:02,405 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/recovered.edits/4.seqid to hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/archive/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7/recovered.edits/4.seqid 2023-07-24 20:11:02,405 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/.tmp/data/np1/table1/c0be415e7e6c5c32db05610e0299b2f7 2023-07-24 20:11:02,405 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 20:11:02,408 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,409 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-24 20:11:02,411 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-24 20:11:02,412 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,412 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-24 20:11:02,412 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229462412"}]},"ts":"9223372036854775807"} 2023-07-24 20:11:02,413 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 20:11:02,413 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c0be415e7e6c5c32db05610e0299b2f7, NAME => 'np1:table1,,1690229461367.c0be415e7e6c5c32db05610e0299b2f7.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 20:11:02,413 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-24 20:11:02,413 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229462413"}]},"ts":"9223372036854775807"} 2023-07-24 20:11:02,415 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-24 20:11:02,418 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 20:11:02,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-24 20:11:02,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 20:11:02,502 INFO [Listener at localhost/33155] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-24 20:11:02,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-24 20:11:02,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,516 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,519 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,521 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 20:11:02,522 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-24 20:11:02,522 DEBUG [Listener at 
localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:02,523 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,524 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 20:11:02,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-24 20:11:02,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 20:11:02,622 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 20:11:02,623 INFO [Listener at localhost/33155] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2c408b4b to 127.0.0.1:56754 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] util.JVMClusterUtil(257): Found active master hash=858218315, stopped=false 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 20:11:02,623 DEBUG [Listener at localhost/33155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 20:11:02,623 INFO [Listener at localhost/33155] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:11:02,626 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:02,626 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:02,626 INFO [Listener at localhost/33155] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 20:11:02,626 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:02,626 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:02,626 DEBUG 
[Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:02,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:02,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:02,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:02,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:02,629 DEBUG [Listener at localhost/33155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e83b336 to 127.0.0.1:56754 2023-07-24 20:11:02,629 DEBUG [Listener at localhost/33155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44593,1690229459259' ***** 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35907,1690229459316' ***** 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41851,1690229459378' ***** 2023-07-24 20:11:02,629 INFO [Listener at localhost/33155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:02,629 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:02,629 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:02,629 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:02,631 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:02,631 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,637 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:02,640 INFO [RS:0;jenkins-hbase4:44593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@981894e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:02,640 INFO [RS:2;jenkins-hbase4:41851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5aa4b84a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 
20:11:02,640 INFO [RS:1;jenkins-hbase4:35907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@120830ef{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:02,640 INFO [RS:0;jenkins-hbase4:44593] server.AbstractConnector(383): Stopped ServerConnector@5e91eb27{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:02,641 INFO [RS:0;jenkins-hbase4:44593] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:02,641 INFO [RS:2;jenkins-hbase4:41851] server.AbstractConnector(383): Stopped ServerConnector@1d36fc51{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:02,641 INFO [RS:1;jenkins-hbase4:35907] server.AbstractConnector(383): Stopped ServerConnector@608fcbd6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:02,641 INFO [RS:0;jenkins-hbase4:44593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@25816971{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:02,641 INFO [RS:2;jenkins-hbase4:41851] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:02,643 INFO [RS:0;jenkins-hbase4:44593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@98fe701{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:02,643 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,643 INFO [RS:2;jenkins-hbase4:41851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1173ffd7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:02,641 INFO [RS:1;jenkins-hbase4:35907] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:02,644 INFO [RS:2;jenkins-hbase4:41851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@16f57de1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:02,645 INFO [RS:1;jenkins-hbase4:35907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@62368e3d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:02,645 INFO [RS:1;jenkins-hbase4:35907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1cdf441b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:02,645 INFO [RS:2;jenkins-hbase4:41851] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:02,645 INFO [RS:2;jenkins-hbase4:41851] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:02,645 INFO [RS:2;jenkins-hbase4:41851] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 20:11:02,645 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:02,645 INFO [RS:0;jenkins-hbase4:44593] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:02,645 DEBUG [RS:2;jenkins-hbase4:41851] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0d836ca2 to 127.0.0.1:56754 2023-07-24 20:11:02,645 INFO [RS:0;jenkins-hbase4:44593] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:02,645 INFO [RS:1;jenkins-hbase4:35907] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:02,646 INFO [RS:0;jenkins-hbase4:44593] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:02,647 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:02,646 DEBUG [RS:2;jenkins-hbase4:41851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,647 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:11:02,647 INFO [RS:2;jenkins-hbase4:41851] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:02,648 DEBUG [RS:0;jenkins-hbase4:44593] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47c987d1 to 127.0.0.1:56754 2023-07-24 20:11:02,648 INFO [RS:2;jenkins-hbase4:41851] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:02,648 DEBUG [RS:0;jenkins-hbase4:44593] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,648 INFO [RS:2;jenkins-hbase4:41851] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 20:11:02,648 INFO [RS:1;jenkins-hbase4:35907] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:02,649 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 20:11:02,649 INFO [RS:1;jenkins-hbase4:35907] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:02,649 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44593,1690229459259; all regions closed. 2023-07-24 20:11:02,649 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(3305): Received CLOSE for f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:02,649 DEBUG [RS:0;jenkins-hbase4:44593] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-24 20:11:02,649 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 20:11:02,649 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 20:11:02,649 DEBUG [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 20:11:02,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:11:02,650 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(3305): Received CLOSE for 6cbd7fa4cc38a3370728b38f07e85f99 2023-07-24 20:11:02,650 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:11:02,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:11:02,650 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(3305): Received CLOSE for 6719584c4c5de5c40cce8453cd4869a9 2023-07-24 20:11:02,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:11:02,650 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:02,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:11:02,651 DEBUG [RS:1;jenkins-hbase4:35907] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a43bbc2 to 127.0.0.1:56754 2023-07-24 20:11:02,651 DEBUG [RS:1;jenkins-hbase4:35907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-24 20:11:02,651 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-24 20:11:02,651 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1478): Online Regions={f16f04ba851897df1fa907c1396f37a7=hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7., 6cbd7fa4cc38a3370728b38f07e85f99=hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99., 6719584c4c5de5c40cce8453cd4869a9=hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9.} 2023-07-24 20:11:02,651 DEBUG [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1504): Waiting on 6719584c4c5de5c40cce8453cd4869a9, 6cbd7fa4cc38a3370728b38f07e85f99, f16f04ba851897df1fa907c1396f37a7 2023-07-24 20:11:02,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f16f04ba851897df1fa907c1396f37a7, disabling compactions & flushes 2023-07-24 20:11:02,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:02,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 
2023-07-24 20:11:02,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. after waiting 0 ms 2023-07-24 20:11:02,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:02,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f16f04ba851897df1fa907c1396f37a7 1/1 column families, dataSize=633 B heapSize=1.09 KB 2023-07-24 20:11:02,661 DEBUG [RS:0;jenkins-hbase4:44593] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs 2023-07-24 20:11:02,661 INFO [RS:0;jenkins-hbase4:44593] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44593%2C1690229459259:(num 1690229459945) 2023-07-24 20:11:02,661 DEBUG [RS:0;jenkins-hbase4:44593] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,661 INFO [RS:0;jenkins-hbase4:44593] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,663 INFO [RS:0;jenkins-hbase4:44593] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:02,663 INFO [RS:0;jenkins-hbase4:44593] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:02,663 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:02,663 INFO [RS:0;jenkins-hbase4:44593] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:02,663 INFO [RS:0;jenkins-hbase4:44593] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:11:02,664 INFO [RS:0;jenkins-hbase4:44593] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44593 2023-07-24 20:11:02,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/info/ae8b60f93f814131b9c58772f7832f01 2023-07-24 20:11:02,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=633 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/.tmp/m/23cbae8669ac45319721954714f17aac 2023-07-24 20:11:02,687 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae8b60f93f814131b9c58772f7832f01 2023-07-24 20:11:02,692 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/.tmp/m/23cbae8669ac45319721954714f17aac as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/m/23cbae8669ac45319721954714f17aac 2023-07-24 20:11:02,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/m/23cbae8669ac45319721954714f17aac, entries=1, sequenceid=7, filesize=4.9 K 2023-07-24 20:11:02,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~633 B/633, heapSize ~1.07 KB/1096, currentSize=0 B/0 for f16f04ba851897df1fa907c1396f37a7 in 48ms, sequenceid=7, compaction requested=false 2023-07-24 20:11:02,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 20:11:02,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/rsgroup/f16f04ba851897df1fa907c1396f37a7/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 20:11:02,710 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:11:02,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 2023-07-24 20:11:02,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f16f04ba851897df1fa907c1396f37a7: 2023-07-24 20:11:02,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690229460708.f16f04ba851897df1fa907c1396f37a7. 
2023-07-24 20:11:02,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6cbd7fa4cc38a3370728b38f07e85f99, disabling compactions & flushes 2023-07-24 20:11:02,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:02,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:02,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. after waiting 0 ms 2023-07-24 20:11:02,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:02,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6cbd7fa4cc38a3370728b38f07e85f99 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 20:11:02,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/rep_barrier/c7c2079a6c624400bf87da285220186b 2023-07-24 20:11:02,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7c2079a6c624400bf87da285220186b 2023-07-24 20:11:02,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/.tmp/info/73bedff23bc54489a40456b018bb55c2 2023-07-24 20:11:02,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/table/fbb05242fcdb45f3a12c583775c1c485 2023-07-24 20:11:02,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73bedff23bc54489a40456b018bb55c2 2023-07-24 20:11:02,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/.tmp/info/73bedff23bc54489a40456b018bb55c2 as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/info/73bedff23bc54489a40456b018bb55c2 2023-07-24 20:11:02,744 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fbb05242fcdb45f3a12c583775c1c485 2023-07-24 20:11:02,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/info/ae8b60f93f814131b9c58772f7832f01 as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/info/ae8b60f93f814131b9c58772f7832f01 2023-07-24 20:11:02,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae8b60f93f814131b9c58772f7832f01 2023-07-24 20:11:02,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/info/ae8b60f93f814131b9c58772f7832f01, entries=32, sequenceid=31, filesize=8.5 K 2023-07-24 20:11:02,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73bedff23bc54489a40456b018bb55c2 2023-07-24 20:11:02,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/info/73bedff23bc54489a40456b018bb55c2, entries=3, sequenceid=8, filesize=5.0 K 2023-07-24 20:11:02,752 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/rep_barrier/c7c2079a6c624400bf87da285220186b as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/rep_barrier/c7c2079a6c624400bf87da285220186b 2023-07-24 20:11:02,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 6cbd7fa4cc38a3370728b38f07e85f99 in 39ms, sequenceid=8, compaction requested=false 2023-07-24 20:11:02,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44593,1690229459259 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:02,758 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:02,759 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44593,1690229459259] 2023-07-24 20:11:02,760 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44593,1690229459259; numProcessing=1 2023-07-24 20:11:02,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/namespace/6cbd7fa4cc38a3370728b38f07e85f99/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-24 20:11:02,761 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44593,1690229459259 already deleted, retry=false 2023-07-24 20:11:02,761 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44593,1690229459259 expired; onlineServers=2 2023-07-24 20:11:02,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6cbd7fa4cc38a3370728b38f07e85f99: 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690229460331.6cbd7fa4cc38a3370728b38f07e85f99. 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6719584c4c5de5c40cce8453cd4869a9, disabling compactions & flushes 2023-07-24 20:11:02,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. after waiting 0 ms 2023-07-24 20:11:02,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 
2023-07-24 20:11:02,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7c2079a6c624400bf87da285220186b 2023-07-24 20:11:02,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/rep_barrier/c7c2079a6c624400bf87da285220186b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-24 20:11:02,764 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/.tmp/table/fbb05242fcdb45f3a12c583775c1c485 as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/table/fbb05242fcdb45f3a12c583775c1c485 2023-07-24 20:11:02,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/quota/6719584c4c5de5c40cce8453cd4869a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:11:02,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:02,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6719584c4c5de5c40cce8453cd4869a9: 2023-07-24 20:11:02,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690229461140.6719584c4c5de5c40cce8453cd4869a9. 2023-07-24 20:11:02,770 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fbb05242fcdb45f3a12c583775c1c485 2023-07-24 20:11:02,770 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/table/fbb05242fcdb45f3a12c583775c1c485, entries=8, sequenceid=31, filesize=5.2 K 2023-07-24 20:11:02,771 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 120ms, sequenceid=31, compaction requested=false 2023-07-24 20:11:02,771 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 20:11:02,779 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-24 20:11:02,780 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:11:02,781 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:11:02,781 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:11:02,781 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 20:11:02,789 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 20:11:02,789 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 20:11:02,814 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 20:11:02,814 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 20:11:02,849 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41851,1690229459378; all regions closed. 2023-07-24 20:11:02,849 DEBUG [RS:2;jenkins-hbase4:41851] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 20:11:02,852 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35907,1690229459316; all regions closed. 2023-07-24 20:11:02,852 DEBUG [RS:1;jenkins-hbase4:35907] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 20:11:02,859 DEBUG [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs 2023-07-24 20:11:02,859 INFO [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41851%2C1690229459378.meta:.meta(num 1690229460252) 2023-07-24 20:11:02,862 DEBUG [RS:1;jenkins-hbase4:35907] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs 2023-07-24 20:11:02,862 INFO [RS:1;jenkins-hbase4:35907] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35907%2C1690229459316:(num 1690229459946) 2023-07-24 20:11:02,862 DEBUG [RS:1;jenkins-hbase4:35907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,862 INFO [RS:1;jenkins-hbase4:35907] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,862 INFO [RS:1;jenkins-hbase4:35907] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:02,863 INFO [RS:1;jenkins-hbase4:35907] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:02,863 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:02,863 INFO [RS:1;jenkins-hbase4:35907] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:02,863 INFO [RS:1;jenkins-hbase4:35907] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:11:02,864 INFO [RS:1;jenkins-hbase4:35907] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35907 2023-07-24 20:11:02,868 DEBUG [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/oldWALs 2023-07-24 20:11:02,868 INFO [RS:2;jenkins-hbase4:41851] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41851%2C1690229459378:(num 1690229459945) 2023-07-24 20:11:02,868 DEBUG [RS:2;jenkins-hbase4:41851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,868 INFO [RS:2;jenkins-hbase4:41851] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:02,868 INFO [RS:2;jenkins-hbase4:41851] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:02,868 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:02,869 INFO [RS:2;jenkins-hbase4:41851] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41851 2023-07-24 20:11:02,872 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:02,872 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35907,1690229459316 2023-07-24 20:11:02,872 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:02,873 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:02,873 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41851,1690229459378 2023-07-24 20:11:02,874 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35907,1690229459316] 2023-07-24 20:11:02,874 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35907,1690229459316; numProcessing=2 2023-07-24 20:11:02,880 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35907,1690229459316 already deleted, retry=false 2023-07-24 20:11:02,880 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35907,1690229459316 expired; onlineServers=1 2023-07-24 20:11:02,880 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration 
[jenkins-hbase4.apache.org,41851,1690229459378] 2023-07-24 20:11:02,881 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41851,1690229459378; numProcessing=3 2023-07-24 20:11:02,882 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41851,1690229459378 already deleted, retry=false 2023-07-24 20:11:02,882 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41851,1690229459378 expired; onlineServers=0 2023-07-24 20:11:02,882 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45907,1690229459172' ***** 2023-07-24 20:11:02,882 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 20:11:02,882 DEBUG [M:0;jenkins-hbase4:45907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3718ddf7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:02,882 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:02,884 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:02,884 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:02,884 INFO [M:0;jenkins-hbase4:45907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6682e202{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 20:11:02,884 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:02,884 INFO [M:0;jenkins-hbase4:45907] server.AbstractConnector(383): Stopped ServerConnector@857110a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:02,884 INFO [M:0;jenkins-hbase4:45907] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:02,885 INFO [M:0;jenkins-hbase4:45907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f454535{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:02,885 INFO [M:0;jenkins-hbase4:45907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@735f8834{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:02,885 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45907,1690229459172 2023-07-24 20:11:02,885 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45907,1690229459172; all regions closed. 
2023-07-24 20:11:02,885 DEBUG [M:0;jenkins-hbase4:45907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:02,885 INFO [M:0;jenkins-hbase4:45907] master.HMaster(1491): Stopping master jetty server 2023-07-24 20:11:02,886 INFO [M:0;jenkins-hbase4:45907] server.AbstractConnector(383): Stopped ServerConnector@6687562e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:02,886 DEBUG [M:0;jenkins-hbase4:45907] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 20:11:02,886 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 20:11:02,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229459611] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229459611,5,FailOnTimeoutGroup] 2023-07-24 20:11:02,886 DEBUG [M:0;jenkins-hbase4:45907] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 20:11:02,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229459611] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229459611,5,FailOnTimeoutGroup] 2023-07-24 20:11:02,887 INFO [M:0;jenkins-hbase4:45907] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 20:11:02,888 INFO [M:0;jenkins-hbase4:45907] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 20:11:02,888 INFO [M:0;jenkins-hbase4:45907] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:02,888 DEBUG [M:0;jenkins-hbase4:45907] master.HMaster(1512): Stopping service threads 2023-07-24 20:11:02,888 INFO [M:0;jenkins-hbase4:45907] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 20:11:02,889 ERROR [M:0;jenkins-hbase4:45907] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 20:11:02,889 INFO [M:0;jenkins-hbase4:45907] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 20:11:02,889 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 20:11:02,889 DEBUG [M:0;jenkins-hbase4:45907] zookeeper.ZKUtil(398): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 20:11:02,889 WARN [M:0;jenkins-hbase4:45907] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 20:11:02,889 INFO [M:0;jenkins-hbase4:45907] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 20:11:02,890 INFO [M:0;jenkins-hbase4:45907] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 20:11:02,890 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:11:02,890 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:02,891 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:02,891 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:11:02,891 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:02,891 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-24 20:11:02,903 INFO [M:0;jenkins-hbase4:45907] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd1b5336344c4dbb905f6948911d04dc 2023-07-24 20:11:02,909 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd1b5336344c4dbb905f6948911d04dc as hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd1b5336344c4dbb905f6948911d04dc 2023-07-24 20:11:02,914 INFO [M:0;jenkins-hbase4:45907] regionserver.HStore(1080): Added hdfs://localhost:40043/user/jenkins/test-data/989bff48-2a32-bd5a-75fc-3b0b6a59d423/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd1b5336344c4dbb905f6948911d04dc, entries=24, sequenceid=194, filesize=12.4 K 2023-07-24 20:11:02,915 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95182, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-24 20:11:02,916 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
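(Editorial aside: the "Committing .../.tmp/&lt;file&gt; as .../proc/&lt;file&gt;" lines above record the flush output being written under a .tmp directory and then renamed into the store directory, so readers only ever see complete files. The sketch below illustrates that write-then-rename commit pattern with the plain Hadoop FileSystem API; the paths and payload are invented for the example and this is not HBase's own HRegionFileSystem code.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative write-then-rename commit, mirroring the .tmp -> proc/ move
// visible in the log. Hypothetical paths; fs.defaultFS would point at the
// test namenode (hdfs://localhost:40043 in this run).
public class TmpCommitSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path tmp = new Path("/user/jenkins/demo/.tmp/part-0");
    Path committed = new Path("/user/jenkins/demo/data/part-0");

    // 1. Write the full file under a .tmp directory that readers ignore.
    try (FSDataOutputStream out = fs.create(tmp, true)) {
      out.writeUTF("flushed cells would go here");
    }

    // 2. Rename into the visible location; on HDFS this is a single namespace
    //    operation, so a reader never observes a half-written store file.
    fs.mkdirs(committed.getParent());
    if (!fs.rename(tmp, committed)) {
      throw new java.io.IOException("commit failed for " + committed);
    }
    System.out.println("Committed " + committed);
  }
}
```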
2023-07-24 20:11:02,917 DEBUG [M:0;jenkins-hbase4:45907] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:11:02,920 INFO [M:0;jenkins-hbase4:45907] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 20:11:02,920 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:02,921 INFO [M:0;jenkins-hbase4:45907] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45907 2023-07-24 20:11:02,923 DEBUG [M:0;jenkins-hbase4:45907] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45907,1690229459172 already deleted, retry=false 2023-07-24 20:11:03,125 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,125 INFO [M:0;jenkins-hbase4:45907] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45907,1690229459172; zookeeper connection closed. 2023-07-24 20:11:03,126 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): master:45907-0x10198f54bce0000, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,226 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,226 INFO [RS:2;jenkins-hbase4:41851] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41851,1690229459378; zookeeper connection closed. 2023-07-24 20:11:03,226 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:41851-0x10198f54bce0003, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,227 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@12df902f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@12df902f 2023-07-24 20:11:03,326 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,326 INFO [RS:1;jenkins-hbase4:35907] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35907,1690229459316; zookeeper connection closed. 2023-07-24 20:11:03,326 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:35907-0x10198f54bce0002, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,326 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1e159285] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1e159285 2023-07-24 20:11:03,426 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,426 INFO [RS:0;jenkins-hbase4:44593] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44593,1690229459259; zookeeper connection closed. 
2023-07-24 20:11:03,426 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): regionserver:44593-0x10198f54bce0001, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:03,426 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c656967] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c656967 2023-07-24 20:11:03,426 INFO [Listener at localhost/33155] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 20:11:03,427 WARN [Listener at localhost/33155] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:03,430 INFO [Listener at localhost/33155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:03,535 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:03,535 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1187978853-172.31.14.131-1690229458297 (Datanode Uuid 122e2113-359c-4528-a04a-4ba857e45fcf) service to localhost/127.0.0.1:40043 2023-07-24 20:11:03,536 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data5/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,536 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data6/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,537 WARN [Listener at localhost/33155] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:03,540 INFO [Listener at localhost/33155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:03,646 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:03,646 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1187978853-172.31.14.131-1690229458297 (Datanode Uuid 1acf5df3-252d-4e9d-8302-c699363d9bd3) service to localhost/127.0.0.1:40043 2023-07-24 20:11:03,647 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data3/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,648 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data4/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,652 WARN [Listener at localhost/33155] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:03,657 INFO [Listener at localhost/33155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:03,761 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:03,761 WARN [BP-1187978853-172.31.14.131-1690229458297 heartbeating to localhost/127.0.0.1:40043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1187978853-172.31.14.131-1690229458297 (Datanode Uuid 473c608b-83c5-4a43-8f6f-aeb3de0599ac) service to localhost/127.0.0.1:40043 2023-07-24 20:11:03,761 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data1/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,762 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/cluster_89b28d99-38e6-d0af-6c21-38d3021d86a4/dfs/data/data2/current/BP-1187978853-172.31.14.131-1690229458297] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:03,771 INFO [Listener at localhost/33155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:03,886 INFO [Listener at localhost/33155] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.log.dir so I do NOT create it in target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0258a65e-82a0-c08f-9d49-07a5cb15c799/hadoop.tmp.dir so I do NOT create it in target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584, deleteOnExit=true 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/test.cache.data in system properties and HBase conf 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir in system properties and HBase conf 2023-07-24 20:11:03,913 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 20:11:03,914 DEBUG [Listener at localhost/33155] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 20:11:03,914 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/nfs.dump.dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/java.io.tmpdir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 20:11:03,915 INFO [Listener at localhost/33155] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 20:11:03,919 WARN [Listener at localhost/33155] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 20:11:03,919 WARN [Listener at localhost/33155] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 20:11:03,959 WARN [Listener at localhost/33155] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:11:03,961 INFO [Listener at localhost/33155] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:11:03,967 INFO [Listener at localhost/33155] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/java.io.tmpdir/Jetty_localhost_36689_hdfs____f2a7fi/webapp 2023-07-24 20:11:03,984 DEBUG [Listener at localhost/33155-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10198f54bce000a, quorum=127.0.0.1:56754, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 20:11:03,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10198f54bce000a, quorum=127.0.0.1:56754, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 20:11:04,060 INFO [Listener at localhost/33155] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36689 2023-07-24 20:11:04,064 WARN [Listener at localhost/33155] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 20:11:04,064 WARN [Listener at localhost/33155] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 20:11:04,103 WARN [Listener at localhost/39673] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:11:04,115 WARN [Listener at localhost/39673] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:11:04,117 WARN [Listener 
at localhost/39673] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:11:04,118 INFO [Listener at localhost/39673] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:11:04,123 INFO [Listener at localhost/39673] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/java.io.tmpdir/Jetty_localhost_43913_datanode____.nt7yzh/webapp 2023-07-24 20:11:04,222 INFO [Listener at localhost/39673] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43913 2023-07-24 20:11:04,234 WARN [Listener at localhost/39813] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:11:04,261 WARN [Listener at localhost/39813] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:11:04,265 WARN [Listener at localhost/39813] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:11:04,267 INFO [Listener at localhost/39813] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:11:04,274 INFO [Listener at localhost/39813] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/java.io.tmpdir/Jetty_localhost_39493_datanode____.4a0c1d/webapp 2023-07-24 20:11:04,399 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x82d1edbabf7d977d: Processing first storage report for DS-d7ddee52-5373-4544-a776-a93e442a51de from datanode 2562a75e-9e92-4d13-a5a8-bb4c901f3fa2 2023-07-24 20:11:04,399 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x82d1edbabf7d977d: from storage DS-d7ddee52-5373-4544-a776-a93e442a51de node DatanodeRegistration(127.0.0.1:33623, datanodeUuid=2562a75e-9e92-4d13-a5a8-bb4c901f3fa2, infoPort=37323, infoSecurePort=0, ipcPort=39813, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,399 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x82d1edbabf7d977d: Processing first storage report for DS-f511586a-fce5-4f26-8cfa-aab6853fa172 from datanode 2562a75e-9e92-4d13-a5a8-bb4c901f3fa2 2023-07-24 20:11:04,399 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x82d1edbabf7d977d: from storage DS-f511586a-fce5-4f26-8cfa-aab6853fa172 node DatanodeRegistration(127.0.0.1:33623, datanodeUuid=2562a75e-9e92-4d13-a5a8-bb4c901f3fa2, infoPort=37323, infoSecurePort=0, ipcPort=39813, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,411 INFO [Listener at localhost/39813] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39493 2023-07-24 20:11:04,423 WARN [Listener at localhost/34539] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-24 20:11:04,445 WARN [Listener at localhost/34539] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 20:11:04,448 WARN [Listener at localhost/34539] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 20:11:04,449 INFO [Listener at localhost/34539] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 20:11:04,453 INFO [Listener at localhost/34539] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/java.io.tmpdir/Jetty_localhost_44181_datanode____3l7ck1/webapp 2023-07-24 20:11:04,546 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x335e4d7eea7c06f4: Processing first storage report for DS-b414c051-d105-46ff-83bd-6b34e4db37d0 from datanode 185844a4-f2a5-4d8b-920e-92c87ff812f7 2023-07-24 20:11:04,546 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x335e4d7eea7c06f4: from storage DS-b414c051-d105-46ff-83bd-6b34e4db37d0 node DatanodeRegistration(127.0.0.1:36505, datanodeUuid=185844a4-f2a5-4d8b-920e-92c87ff812f7, infoPort=41651, infoSecurePort=0, ipcPort=34539, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,546 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x335e4d7eea7c06f4: Processing first storage report for DS-ca39df1c-3b3b-4090-81b6-780a1585a803 from datanode 185844a4-f2a5-4d8b-920e-92c87ff812f7 2023-07-24 20:11:04,546 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x335e4d7eea7c06f4: from storage DS-ca39df1c-3b3b-4090-81b6-780a1585a803 node DatanodeRegistration(127.0.0.1:36505, datanodeUuid=185844a4-f2a5-4d8b-920e-92c87ff812f7, infoPort=41651, infoSecurePort=0, ipcPort=34539, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,559 INFO [Listener at localhost/34539] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44181 2023-07-24 20:11:04,568 WARN [Listener at localhost/44137] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 20:11:04,665 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47b17d367ca0fbc4: Processing first storage report for DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26 from datanode 41719b79-33e0-404f-8b74-6c1ff3cb401d 2023-07-24 20:11:04,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47b17d367ca0fbc4: from storage DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26 node DatanodeRegistration(127.0.0.1:37057, datanodeUuid=41719b79-33e0-404f-8b74-6c1ff3cb401d, infoPort=36959, infoSecurePort=0, ipcPort=44137, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,665 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47b17d367ca0fbc4: Processing first storage 
report for DS-95826d62-2e47-4804-abd4-4aba8aa8ed06 from datanode 41719b79-33e0-404f-8b74-6c1ff3cb401d 2023-07-24 20:11:04,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47b17d367ca0fbc4: from storage DS-95826d62-2e47-4804-abd4-4aba8aa8ed06 node DatanodeRegistration(127.0.0.1:37057, datanodeUuid=41719b79-33e0-404f-8b74-6c1ff3cb401d, infoPort=36959, infoSecurePort=0, ipcPort=44137, storageInfo=lv=-57;cid=testClusterID;nsid=2003837932;c=1690229463922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 20:11:04,674 DEBUG [Listener at localhost/44137] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48 2023-07-24 20:11:04,676 INFO [Listener at localhost/44137] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/zookeeper_0, clientPort=60607, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 20:11:04,677 INFO [Listener at localhost/44137] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60607 2023-07-24 20:11:04,677 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,678 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,694 INFO [Listener at localhost/44137] util.FSUtils(471): Created version file at hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 with version=8 2023-07-24 20:11:04,694 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:32791/user/jenkins/test-data/c5565b34-1b14-d1c0-bed5-ba945a73efd1/hbase-staging 2023-07-24 20:11:04,695 DEBUG [Listener at localhost/44137] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 20:11:04,695 DEBUG [Listener at localhost/44137] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 20:11:04,695 DEBUG [Listener at localhost/44137] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 20:11:04,695 DEBUG [Listener at localhost/44137] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
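(Editorial aside: the entries above show the harness tearing down one minicluster and bringing up a fresh one, with the counts echoed in the StartMiniClusterOption line. A rough sketch of that startup/teardown, assuming the HBase 2.x test API (HBaseTestingUtility and StartMiniClusterOption), is shown below; it is an illustration of what the harness does, not the TestRSGroupsAdmin1 source itself.)

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Sketch of starting/stopping a minicluster like the one in this log:
// 1 master, 3 region servers, 3 datanodes, 1 ZK server.
public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, MiniZooKeeperCluster, master, RSes
    try {
      System.out.println("cluster key: " + util.getClusterKey());
    } finally {
      util.shutdownMiniCluster();    // the "Minicluster is down" path seen earlier
    }
  }
}
```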
2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:11:04,696 INFO [Listener at localhost/44137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:11:04,697 INFO [Listener at localhost/44137] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34595 2023-07-24 20:11:04,698 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,699 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,699 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34595 connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:04,707 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:345950x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:04,708 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34595-0x10198f561670000 connected 2023-07-24 20:11:04,722 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:04,723 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:04,723 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:11:04,723 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34595 2023-07-24 20:11:04,724 DEBUG [Listener at localhost/44137] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34595 2023-07-24 20:11:04,724 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34595 2023-07-24 20:11:04,724 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34595 2023-07-24 20:11:04,724 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34595 2023-07-24 20:11:04,726 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:11:04,726 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:11:04,726 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:11:04,727 INFO [Listener at localhost/44137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 20:11:04,727 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:11:04,727 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:11:04,727 INFO [Listener at localhost/44137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 20:11:04,727 INFO [Listener at localhost/44137] http.HttpServer(1146): Jetty bound to port 41735 2023-07-24 20:11:04,728 INFO [Listener at localhost/44137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:04,728 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,729 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@576b5e7e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:11:04,729 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,729 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23b1628c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:11:04,734 INFO [Listener at localhost/44137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:11:04,735 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:11:04,735 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:11:04,735 INFO [Listener at localhost/44137] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 20:11:04,736 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,737 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@52f44e2f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 20:11:04,738 INFO [Listener at localhost/44137] server.AbstractConnector(333): Started ServerConnector@1902cfed{HTTP/1.1, (http/1.1)}{0.0.0.0:41735} 2023-07-24 20:11:04,738 INFO [Listener at localhost/44137] server.Server(415): Started @40692ms 2023-07-24 20:11:04,738 INFO [Listener at localhost/44137] master.HMaster(444): hbase.rootdir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0, hbase.cluster.distributed=false 2023-07-24 20:11:04,751 INFO [Listener at localhost/44137] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:11:04,751 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,751 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,752 INFO [Listener at localhost/44137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 
20:11:04,752 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,752 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:11:04,752 INFO [Listener at localhost/44137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:11:04,752 INFO [Listener at localhost/44137] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43627 2023-07-24 20:11:04,753 INFO [Listener at localhost/44137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:11:04,754 DEBUG [Listener at localhost/44137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:11:04,755 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,756 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,757 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43627 connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:04,760 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:436270x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:04,762 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:436270x0, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:04,762 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43627-0x10198f561670001 connected 2023-07-24 20:11:04,762 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:04,763 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:11:04,766 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43627 2023-07-24 20:11:04,766 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43627 2023-07-24 20:11:04,767 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43627 2023-07-24 20:11:04,767 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43627 2023-07-24 20:11:04,768 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43627 2023-07-24 20:11:04,769 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:11:04,769 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:11:04,769 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:11:04,770 INFO [Listener at localhost/44137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:11:04,770 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:11:04,770 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:11:04,770 INFO [Listener at localhost/44137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 20:11:04,771 INFO [Listener at localhost/44137] http.HttpServer(1146): Jetty bound to port 33849 2023-07-24 20:11:04,771 INFO [Listener at localhost/44137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:04,772 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,772 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@463f05bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:11:04,773 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,773 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f1a837d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:11:04,777 INFO [Listener at localhost/44137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:11:04,778 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:11:04,778 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:11:04,778 INFO [Listener at localhost/44137] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:11:04,779 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,779 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@48ff2726{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:04,780 INFO [Listener at localhost/44137] server.AbstractConnector(333): Started ServerConnector@2b30ac8{HTTP/1.1, (http/1.1)}{0.0.0.0:33849} 2023-07-24 20:11:04,781 INFO [Listener at localhost/44137] server.Server(415): Started @40735ms 2023-07-24 20:11:04,792 INFO [Listener at localhost/44137] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:11:04,792 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,792 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,792 INFO [Listener at localhost/44137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:11:04,792 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,793 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:11:04,793 INFO [Listener at localhost/44137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:11:04,793 INFO [Listener at localhost/44137] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32969 2023-07-24 20:11:04,794 INFO [Listener at localhost/44137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:11:04,795 DEBUG [Listener at localhost/44137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:11:04,795 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,796 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,797 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32969 connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:04,801 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:329690x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:04,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32969-0x10198f561670002 connected 2023-07-24 20:11:04,802 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): 
regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:04,803 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:04,803 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:11:04,803 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32969 2023-07-24 20:11:04,804 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32969 2023-07-24 20:11:04,804 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32969 2023-07-24 20:11:04,804 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32969 2023-07-24 20:11:04,804 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32969 2023-07-24 20:11:04,806 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:11:04,806 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:11:04,806 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:11:04,807 INFO [Listener at localhost/44137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:11:04,807 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:11:04,807 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:11:04,807 INFO [Listener at localhost/44137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
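The repeated "Set watcher on znode that does not yet exist" lines above amount to an exists-with-watch call against the ensemble at 127.0.0.1:60607. A self-contained sketch using the plain Apache ZooKeeper client (not HBase's internal ZKUtil/ZKWatcher) shows the same behavior; the sleep and timeout values are arbitrary:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the test ensemble with a 90s session timeout.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:60607", 90_000,
        (WatchedEvent e) -> System.out.println("ZK event: " + e));
    // exists() with watch=true registers a watcher even though /hbase/master
    // may not exist yet; a NodeCreated event fires once the active master
    // registers itself, matching the NodeCreated events later in the log.
    zk.exists("/hbase/master", true);
    Thread.sleep(5_000);
    zk.close();
  }
}
```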
2023-07-24 20:11:04,808 INFO [Listener at localhost/44137] http.HttpServer(1146): Jetty bound to port 42945 2023-07-24 20:11:04,808 INFO [Listener at localhost/44137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:04,809 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,809 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5021e219{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:11:04,809 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,809 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@174373f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:11:04,813 INFO [Listener at localhost/44137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:11:04,814 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:11:04,814 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:11:04,814 INFO [Listener at localhost/44137] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:11:04,815 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,815 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4b90518a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:04,817 INFO [Listener at localhost/44137] server.AbstractConnector(333): Started ServerConnector@41c3303a{HTTP/1.1, (http/1.1)}{0.0.0.0:42945} 2023-07-24 20:11:04,818 INFO [Listener at localhost/44137] server.Server(415): Started @40772ms 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:11:04,829 INFO [Listener at localhost/44137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:11:04,830 INFO [Listener at localhost/44137] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43869 2023-07-24 20:11:04,831 INFO [Listener at localhost/44137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:11:04,832 DEBUG [Listener at localhost/44137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:11:04,832 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,833 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,834 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43869 connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:04,839 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:438690x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:04,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43869-0x10198f561670003 connected 2023-07-24 20:11:04,840 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:04,841 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:04,841 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:11:04,842 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43869 2023-07-24 20:11:04,842 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43869 2023-07-24 20:11:04,842 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43869 2023-07-24 20:11:04,842 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43869 2023-07-24 20:11:04,843 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43869 2023-07-24 20:11:04,844 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:11:04,844 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:11:04,844 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:11:04,845 INFO [Listener at localhost/44137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:11:04,845 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:11:04,845 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:11:04,845 INFO [Listener at localhost/44137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 20:11:04,846 INFO [Listener at localhost/44137] http.HttpServer(1146): Jetty bound to port 37123 2023-07-24 20:11:04,846 INFO [Listener at localhost/44137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:04,847 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,847 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@17ba8aa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:11:04,848 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,848 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5aabd8da{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:11:04,853 INFO [Listener at localhost/44137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:11:04,854 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:11:04,854 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:11:04,854 INFO [Listener at localhost/44137] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:11:04,855 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:04,856 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@761c23c7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:04,857 INFO [Listener at localhost/44137] server.AbstractConnector(333): Started ServerConnector@24c740ab{HTTP/1.1, (http/1.1)}{0.0.0.0:37123} 2023-07-24 20:11:04,858 INFO [Listener at localhost/44137] server.Server(415): Started @40812ms 2023-07-24 20:11:04,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:04,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2171fde7{HTTP/1.1, (http/1.1)}{0.0.0.0:37787} 2023-07-24 20:11:04,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @40818ms 2023-07-24 20:11:04,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:04,866 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 20:11:04,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:04,868 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:04,868 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:04,868 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:04,869 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:04,868 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:04,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:11:04,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34595,1690229464695 from backup master directory 2023-07-24 
20:11:04,872 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:11:04,873 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:04,873 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 20:11:04,873 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:11:04,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:04,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/hbase.id with ID: 77f9e1c8-839e-4654-99af-bf7f9a811793 2023-07-24 20:11:04,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:04,912 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:04,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x43ff456d to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:04,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2575c00e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:04,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:04,933 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 20:11:04,933 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:04,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store-tmp 2023-07-24 20:11:04,946 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:04,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:11:04,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:04,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:04,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:11:04,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:04,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
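The 'proc' family attributes printed above (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', ...) correspond to the public descriptor builders. The sketch below is a hedged approximation using a hypothetical table name; the real master:store region is created internally by the master, not by user code with these builders.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();
    // "demo:store" is a placeholder table name for illustration only.
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "store"))
        .setColumnFamily(proc)
        .build();
  }
}
```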
2023-07-24 20:11:04,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:11:04,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/WALs/jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:04,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34595%2C1690229464695, suffix=, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/WALs/jenkins-hbase4.apache.org,34595,1690229464695, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/oldWALs, maxLogs=10 2023-07-24 20:11:04,972 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:04,972 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:04,974 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:04,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/WALs/jenkins-hbase4.apache.org,34595,1690229464695/jenkins-hbase4.apache.org%2C34595%2C1690229464695.1690229464952 2023-07-24 20:11:04,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK], DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK]] 2023-07-24 20:11:04,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:04,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:04,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,979 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,980 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 20:11:04,980 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 20:11:04,981 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:04,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,982 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 20:11:04,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:04,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9657009280, jitterRate=-0.10062092542648315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:04,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:11:04,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 20:11:04,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 20:11:04,987 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 20:11:04,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 20:11:04,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 20:11:04,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 20:11:04,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 20:11:04,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 20:11:04,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 20:11:04,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 20:11:04,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 20:11:04,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 20:11:04,996 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:04,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 20:11:04,997 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 20:11:04,997 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 20:11:04,999 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:04,999 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:04,999 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 20:11:04,999 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:04,999 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:04,999 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34595,1690229464695, sessionid=0x10198f561670000, setting cluster-up flag (Was=false) 2023-07-24 20:11:05,004 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:05,010 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 20:11:05,010 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:05,013 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:05,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 20:11:05,019 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:05,020 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.hbase-snapshot/.tmp 2023-07-24 20:11:05,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 20:11:05,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 20:11:05,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 20:11:05,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:11:05,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
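The log shows RSGroupAdminEndpoint loaded as a master coprocessor. How this particular test wires it up is not visible here, but rsgroup support in the 2.4 rsgroup module is commonly enabled through configuration before the cluster starts, roughly as in this hedged sketch (both keys and the balancer class name should be verified against the branch in use):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupSetupSketch {
  public static Configuration rsGroupConf() {
    Configuration conf = HBaseConfiguration.create();
    // Register the RSGroup admin endpoint on the master, as seen in the log.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // The rsgroup-aware balancer usually accompanies the endpoint (assumption).
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}
```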
2023-07-24 20:11:05,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 20:11:05,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:11:05,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 20:11:05,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 20:11:05,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:11:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690229495035 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 20:11:05,036 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:11:05,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,036 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 20:11:05,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 20:11:05,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 20:11:05,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 20:11:05,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 20:11:05,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 20:11:05,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229465038,5,FailOnTimeoutGroup] 2023-07-24 20:11:05,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229465038,5,FailOnTimeoutGroup] 2023-07-24 20:11:05,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
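The "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled" lines come from HBase's chore framework (ChoreService plus ScheduledChore). A minimal chore looks roughly like the sketch below; these are internal (@InterfaceAudience.Private) classes, so this is illustration only, not a supported extension point.

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Stoppable lets the chore framework observe shutdown requests.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // A chore that runs every 600000 ms, like the cleaner chores above.
    ScheduledChore cleaner = new ScheduledChore("DemoCleaner", stopper, 600_000) {
      @Override protected void chore() {
        // Periodic cleanup work would go here (e.g. deleting expired files).
      }
    };
    ChoreService service = new ChoreService("demo");
    service.scheduleChore(cleaner);
    Thread.sleep(1_000);
    service.shutdown();
  }
}
```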
2023-07-24 20:11:05,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,038 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:05,048 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:05,049 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:05,049 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 2023-07-24 20:11:05,057 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:11:05,060 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(951): ClusterId : 77f9e1c8-839e-4654-99af-bf7f9a811793 2023-07-24 20:11:05,060 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(951): ClusterId : 77f9e1c8-839e-4654-99af-bf7f9a811793 2023-07-24 20:11:05,060 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:11:05,060 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(951): ClusterId : 77f9e1c8-839e-4654-99af-bf7f9a811793 2023-07-24 20:11:05,060 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:11:05,060 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:11:05,060 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/info 2023-07-24 20:11:05,061 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:11:05,061 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,062 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:11:05,063 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:11:05,063 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:11:05,063 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:11:05,063 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:11:05,063 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:11:05,063 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot 
initializing 2023-07-24 20:11:05,063 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:11:05,063 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:11:05,064 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,064 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:11:05,065 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/table 2023-07-24 20:11:05,065 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:11:05,066 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,066 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:11:05,068 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740 2023-07-24 20:11:05,068 DEBUG [RS:0;jenkins-hbase4:43627] zookeeper.ReadOnlyZKClient(139): Connect 0x6c555cba to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:05,068 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:11:05,069 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(45): Procedure 
online-snapshot initialized 2023-07-24 20:11:05,070 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740 2023-07-24 20:11:05,072 DEBUG [RS:1;jenkins-hbase4:32969] zookeeper.ReadOnlyZKClient(139): Connect 0x3e956e84 to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:05,072 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ReadOnlyZKClient(139): Connect 0x035458c9 to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:05,076 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 20:11:05,078 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:11:05,084 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:05,084 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10361652000, jitterRate=-0.03499595820903778}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:11:05,085 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:11:05,085 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:11:05,085 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:11:05,085 DEBUG [RS:0;jenkins-hbase4:43627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6292af40, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:05,085 DEBUG [RS:0;jenkins-hbase4:43627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66ad37ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:05,086 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 20:11:05,086 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 20:11:05,086 
INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 20:11:05,087 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 20:11:05,088 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 20:11:05,091 DEBUG [RS:1;jenkins-hbase4:32969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3350b520, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:05,091 DEBUG [RS:2;jenkins-hbase4:43869] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3aebd3a9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:05,091 DEBUG [RS:1;jenkins-hbase4:32969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fe1b269, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:05,091 DEBUG [RS:2;jenkins-hbase4:43869] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@329df8c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:05,098 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43627 2023-07-24 20:11:05,098 INFO [RS:0;jenkins-hbase4:43627] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:11:05,098 INFO [RS:0;jenkins-hbase4:43627] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:11:05,098 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 20:11:05,099 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34595,1690229464695 with isa=jenkins-hbase4.apache.org/172.31.14.131:43627, startcode=1690229464751 2023-07-24 20:11:05,099 DEBUG [RS:0;jenkins-hbase4:43627] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:11:05,099 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:32969 2023-07-24 20:11:05,099 INFO [RS:1;jenkins-hbase4:32969] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:11:05,099 INFO [RS:1;jenkins-hbase4:32969] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:11:05,099 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:11:05,100 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34595,1690229464695 with isa=jenkins-hbase4.apache.org/172.31.14.131:32969, startcode=1690229464792 2023-07-24 20:11:05,100 DEBUG [RS:1;jenkins-hbase4:32969] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:11:05,101 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55433, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:11:05,102 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34595] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,103 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 20:11:05,103 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33647, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:11:05,103 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 20:11:05,103 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 2023-07-24 20:11:05,103 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39673 2023-07-24 20:11:05,103 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34595] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,103 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41735 2023-07-24 20:11:05,104 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:11:05,104 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 20:11:05,104 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 2023-07-24 20:11:05,104 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39673 2023-07-24 20:11:05,104 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41735 2023-07-24 20:11:05,104 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43869 2023-07-24 20:11:05,104 INFO [RS:2;jenkins-hbase4:43869] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:11:05,104 INFO [RS:2;jenkins-hbase4:43869] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:11:05,104 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:11:05,105 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:05,110 DEBUG [RS:0;jenkins-hbase4:43627] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,110 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43627,1690229464751] 2023-07-24 20:11:05,110 WARN [RS:0;jenkins-hbase4:43627] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
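[editor note] The RSGroupInfoManagerImpl$ServerEventsListenerThread entries above record each newly registered region server being folded into the default rsgroup ("Updated with servers: 1", then 2). Purely as an illustration of the rsgroup admin API this test module exercises, the sketch below creates a group and moves one server into it; the group name "testgroup", the host/port, and the connection setup are assumptions for the example, not values taken from this log.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create a group and move one server (host:port is illustrative) out of "default".
      rsGroupAdmin.addRSGroup("testgroup");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43627)),
          "testgroup");

      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("testgroup");
      System.out.println("testgroup servers: " + info.getServers());
    }
  }
}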
2023-07-24 20:11:05,110 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32969,1690229464792] 2023-07-24 20:11:05,110 INFO [RS:0;jenkins-hbase4:43627] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:05,110 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34595,1690229464695 with isa=jenkins-hbase4.apache.org/172.31.14.131:43869, startcode=1690229464828 2023-07-24 20:11:05,110 DEBUG [RS:1;jenkins-hbase4:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,110 DEBUG [RS:2;jenkins-hbase4:43869] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:11:05,110 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,110 WARN [RS:1;jenkins-hbase4:32969] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 20:11:05,110 INFO [RS:1;jenkins-hbase4:32969] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:05,110 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,114 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51683, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:11:05,115 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34595] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:11:05,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 20:11:05,116 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 2023-07-24 20:11:05,116 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39673 2023-07-24 20:11:05,116 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41735 2023-07-24 20:11:05,118 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:05,118 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:05,118 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:05,118 DEBUG [RS:1;jenkins-hbase4:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,118 DEBUG [RS:0;jenkins-hbase4:43627] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,118 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,118 WARN [RS:2;jenkins-hbase4:43869] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 20:11:05,119 INFO [RS:2;jenkins-hbase4:43869] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:05,119 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43869,1690229464828] 2023-07-24 20:11:05,119 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,119 DEBUG [RS:1;jenkins-hbase4:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,119 DEBUG [RS:0;jenkins-hbase4:43627] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,125 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:11:05,125 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,125 INFO [RS:1;jenkins-hbase4:32969] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:11:05,126 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,126 DEBUG [RS:0;jenkins-hbase4:43627] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:11:05,127 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,127 INFO [RS:0;jenkins-hbase4:43627] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:11:05,127 INFO [RS:1;jenkins-hbase4:32969] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 
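[editor note] Each region server above instantiates a WALProvider of class AsyncFSWALProvider via wal.WALFactory. The provider is selected through configuration rather than code; a minimal sketch of that selection follows, assuming the standard branch-2 property names and treating the chosen values as illustrative rather than read from this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider, as seen in the log; "filesystem"
    // would select the classic FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    // A separate provider can be configured for the hbase:meta WAL.
    conf.set("hbase.wal.meta_provider", "asyncfs");
    System.out.println("wal provider = " + conf.get("hbase.wal.provider"));
  }
}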
2023-07-24 20:11:05,127 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,128 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,128 INFO [RS:1;jenkins-hbase4:32969] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:11:05,128 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,128 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:11:05,129 INFO [RS:0;jenkins-hbase4:43627] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:11:05,130 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,131 INFO [RS:0;jenkins-hbase4:43627] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:11:05,131 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,131 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
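[editor note] The MemStoreFlusher entries report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, which is roughly 95% of the limit. Those figures derive from the region server heap size and two configuration fractions; the sketch below shows how they are commonly tuned, assuming the usual 0.4 and 0.95 defaults, which are not read from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreLimitSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap usable by all memstores combined.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Low-water mark as a fraction of the limit above; forced flushes run
    // until usage drops back below this mark (0.95 * 782.4 M ~ 743.3 M).
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
  }
}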
2023-07-24 20:11:05,132 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,132 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,132 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:11:05,132 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,133 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,133 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:11:05,134 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,134 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,134 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,134 DEBUG [RS:1;jenkins-hbase4:32969] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,135 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,135 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,136 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,136 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 20:11:05,136 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,136 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,137 DEBUG [RS:0;jenkins-hbase4:43627] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,141 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:11:05,141 INFO [RS:2;jenkins-hbase4:43869] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:11:05,146 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,148 INFO [RS:2;jenkins-hbase4:43869] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:11:05,148 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,148 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,149 INFO [RS:2;jenkins-hbase4:43869] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:11:05,149 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
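[editor note] The ChoreService entries above enable several per-regionserver chores (CompactionChecker and MemstoreFlusherChore every 1000 ms, CompactedHFilesCleaner every 120000 ms, nonceCleaner every 360000 ms). As a rough illustration of that mechanism only, the sketch below defines and schedules a trivial chore of the same shape on a standalone ChoreService; the chore name, period, and Stoppable stub are made up for the example.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };

    // Runs chore() every 1000 ms, the same cadence as the CompactionChecker entries above.
    ScheduledChore heartbeat = new ScheduledChore("exampleChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("chore tick");
      }
    };

    ChoreService choreService = new ChoreService("example");
    choreService.scheduleChore(heartbeat);

    Thread.sleep(3000);
    choreService.shutdown();
  }
}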
2023-07-24 20:11:05,151 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:11:05,153 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,153 INFO [RS:1;jenkins-hbase4:32969] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:11:05,153 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,154 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32969,1690229464792-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,154 DEBUG [RS:2;jenkins-hbase4:43869] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:05,155 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,155 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,155 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,168 INFO [RS:0;jenkins-hbase4:43627] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:11:05,168 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43627,1690229464751-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:11:05,175 INFO [RS:1;jenkins-hbase4:32969] regionserver.Replication(203): jenkins-hbase4.apache.org,32969,1690229464792 started 2023-07-24 20:11:05,175 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32969,1690229464792, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32969, sessionid=0x10198f561670002 2023-07-24 20:11:05,178 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:11:05,178 DEBUG [RS:1;jenkins-hbase4:32969] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,178 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32969,1690229464792' 2023-07-24 20:11:05,178 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:11:05,179 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32969,1690229464792' 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:11:05,180 DEBUG [RS:1;jenkins-hbase4:32969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:11:05,181 DEBUG [RS:1;jenkins-hbase4:32969] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:11:05,181 INFO [RS:1;jenkins-hbase4:32969] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:11:05,181 INFO [RS:1;jenkins-hbase4:32969] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 20:11:05,186 INFO [RS:2;jenkins-hbase4:43869] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:11:05,186 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43869,1690229464828-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:11:05,187 INFO [RS:0;jenkins-hbase4:43627] regionserver.Replication(203): jenkins-hbase4.apache.org,43627,1690229464751 started 2023-07-24 20:11:05,187 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43627,1690229464751, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43627, sessionid=0x10198f561670001 2023-07-24 20:11:05,187 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:11:05,187 DEBUG [RS:0;jenkins-hbase4:43627] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,187 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43627,1690229464751' 2023-07-24 20:11:05,187 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43627,1690229464751' 2023-07-24 20:11:05,188 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:11:05,189 DEBUG [RS:0;jenkins-hbase4:43627] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:11:05,189 DEBUG [RS:0;jenkins-hbase4:43627] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:11:05,189 INFO [RS:0;jenkins-hbase4:43627] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:11:05,189 INFO [RS:0;jenkins-hbase4:43627] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 20:11:05,198 INFO [RS:2;jenkins-hbase4:43869] regionserver.Replication(203): jenkins-hbase4.apache.org,43869,1690229464828 started 2023-07-24 20:11:05,198 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43869,1690229464828, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43869, sessionid=0x10198f561670003 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43869,1690229464828' 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43869,1690229464828' 2023-07-24 20:11:05,199 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:11:05,200 DEBUG [RS:2;jenkins-hbase4:43869] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:11:05,200 DEBUG [RS:2;jenkins-hbase4:43869] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:11:05,200 INFO [RS:2;jenkins-hbase4:43869] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:11:05,200 INFO [RS:2;jenkins-hbase4:43869] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
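[editor note] All three region servers log that quota support is disabled, so neither RegionServerRpcQuotaManager nor RegionServerSpaceQuotaManager starts. For reference, a minimal sketch of the switch that enables quota support cluster-wide, assuming the standard hbase.quota.enabled property; the rest of the setup is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaSwitchSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // RPC and space quota support is off by default; this flag turns on both
    // quota managers the log lines above report as disabled.
    conf.setBoolean("hbase.quota.enabled", true);
  }
}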
2023-07-24 20:11:05,238 DEBUG [jenkins-hbase4:34595] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 20:11:05,239 DEBUG [jenkins-hbase4:34595] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:05,239 DEBUG [jenkins-hbase4:34595] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:05,239 DEBUG [jenkins-hbase4:34595] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:05,239 DEBUG [jenkins-hbase4:34595] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:05,239 DEBUG [jenkins-hbase4:34595] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:05,240 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32969,1690229464792, state=OPENING 2023-07-24 20:11:05,241 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 20:11:05,243 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:05,243 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32969,1690229464792}] 2023-07-24 20:11:05,243 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:11:05,283 INFO [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32969%2C1690229464792, suffix=, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,32969,1690229464792, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs, maxLogs=32 2023-07-24 20:11:05,291 INFO [RS:0;jenkins-hbase4:43627] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43627%2C1690229464751, suffix=, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43627,1690229464751, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs, maxLogs=32 2023-07-24 20:11:05,302 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:05,302 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:05,302 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:05,304 INFO [RS:2;jenkins-hbase4:43869] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43869%2C1690229464828, suffix=, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43869,1690229464828, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs, maxLogs=32 2023-07-24 20:11:05,308 INFO [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,32969,1690229464792/jenkins-hbase4.apache.org%2C32969%2C1690229464792.1690229465283 2023-07-24 20:11:05,312 DEBUG [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK], DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK], DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK]] 2023-07-24 20:11:05,318 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:05,318 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:05,318 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:05,328 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:05,328 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:05,328 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:05,328 INFO [RS:0;jenkins-hbase4:43627] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43627,1690229464751/jenkins-hbase4.apache.org%2C43627%2C1690229464751.1690229465292 2023-07-24 20:11:05,329 WARN [ReadOnlyZKClient-127.0.0.1:60607@0x43ff456d] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 20:11:05,329 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:05,330 DEBUG [RS:0;jenkins-hbase4:43627] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK], DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK], DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK]] 2023-07-24 20:11:05,343 INFO [RS:2;jenkins-hbase4:43869] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,43869,1690229464828/jenkins-hbase4.apache.org%2C43869%2C1690229464828.1690229465304 2023-07-24 20:11:05,343 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:05,344 DEBUG [RS:2;jenkins-hbase4:43869] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK], DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK], DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK]] 2023-07-24 20:11:05,344 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32969] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51792 deadline: 1690229525344, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,398 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,399 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:11:05,401 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:11:05,405 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 20:11:05,405 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:05,407 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32969%2C1690229464792.meta, suffix=.meta, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,32969,1690229464792, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs, maxLogs=32 2023-07-24 20:11:05,424 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:05,425 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:05,425 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:05,430 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,32969,1690229464792/jenkins-hbase4.apache.org%2C32969%2C1690229464792.meta.1690229465407.meta 2023-07-24 20:11:05,430 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK], DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK]] 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 20:11:05,431 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 20:11:05,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 20:11:05,434 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 20:11:05,443 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/info 2023-07-24 20:11:05,443 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/info 2023-07-24 20:11:05,444 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 20:11:05,444 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,444 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 20:11:05,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:11:05,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/rep_barrier 2023-07-24 20:11:05,446 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 20:11:05,447 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,447 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 20:11:05,448 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/table 2023-07-24 20:11:05,448 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/table 2023-07-24 20:11:05,449 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 20:11:05,449 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,450 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740 2023-07-24 20:11:05,452 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740 2023-07-24 20:11:05,456 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 20:11:05,458 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 20:11:05,460 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9654944000, jitterRate=-0.10081326961517334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 20:11:05,460 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 20:11:05,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690229465397 2023-07-24 20:11:05,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 20:11:05,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 20:11:05,472 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32969,1690229464792, state=OPEN 2023-07-24 20:11:05,475 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 20:11:05,475 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 20:11:05,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 20:11:05,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32969,1690229464792 in 232 msec 2023-07-24 20:11:05,480 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 20:11:05,480 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 392 msec 2023-07-24 20:11:05,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 459 msec 2023-07-24 20:11:05,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690229465482, completionTime=-1 2023-07-24 20:11:05,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 20:11:05,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 20:11:05,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 20:11:05,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690229525488 2023-07-24 20:11:05,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690229585488 2023-07-24 20:11:05,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34595,1690229464695-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34595,1690229464695-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34595,1690229464695-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34595, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:05,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 20:11:05,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:05,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 20:11:05,501 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 20:11:05,501 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:05,502 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:05,504 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,504 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb empty. 2023-07-24 20:11:05,505 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,505 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 20:11:05,565 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:05,566 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 473d7277b89064b213c46acf5599c6bb, NAME => 'hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp 2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 473d7277b89064b213c46acf5599c6bb, disabling compactions & flushes 2023-07-24 20:11:05,579 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 
2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. after waiting 0 ms 2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,579 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,579 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 473d7277b89064b213c46acf5599c6bb: 2023-07-24 20:11:05,582 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:05,583 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229465583"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229465583"}]},"ts":"1690229465583"} 2023-07-24 20:11:05,586 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:11:05,586 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:05,587 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229465587"}]},"ts":"1690229465587"} 2023-07-24 20:11:05,588 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 20:11:05,591 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:05,591 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:05,592 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:05,592 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:05,592 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:05,592 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=473d7277b89064b213c46acf5599c6bb, ASSIGN}] 2023-07-24 20:11:05,594 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=473d7277b89064b213c46acf5599c6bb, ASSIGN 2023-07-24 20:11:05,596 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=473d7277b89064b213c46acf5599c6bb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43869,1690229464828; forceNewPlan=false, retain=false 2023-07-24 20:11:05,648 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:05,650 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 20:11:05,651 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:05,652 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:05,653 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,654 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654 empty. 
2023-07-24 20:11:05,655 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,655 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 20:11:05,670 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:05,671 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8748e89fab51a048dc1724dc83ecc654, NAME => 'hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp 2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 8748e89fab51a048dc1724dc83ecc654, disabling compactions & flushes 2023-07-24 20:11:05,686 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. after waiting 0 ms 2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:05,686 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 
2023-07-24 20:11:05,686 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 8748e89fab51a048dc1724dc83ecc654: 2023-07-24 20:11:05,688 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:05,689 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229465689"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229465689"}]},"ts":"1690229465689"} 2023-07-24 20:11:05,691 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 20:11:05,691 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:05,692 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229465691"}]},"ts":"1690229465691"} 2023-07-24 20:11:05,692 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 20:11:05,697 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:05,697 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:05,697 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:05,697 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:05,697 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:05,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8748e89fab51a048dc1724dc83ecc654, ASSIGN}] 2023-07-24 20:11:05,698 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8748e89fab51a048dc1724dc83ecc654, ASSIGN 2023-07-24 20:11:05,699 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=8748e89fab51a048dc1724dc83ecc654, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32969,1690229464792; forceNewPlan=false, retain=false 2023-07-24 20:11:05,699 INFO [jenkins-hbase4:34595] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
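(For reference: the CreateTableProcedure entries above print the full table descriptors for 'hbase:namespace' and 'hbase:rsgroup'. Below is a minimal sketch of building a column family with the same attributes as the 'info' family using the HBase 2.x client API; the table name "sketch_table" and the standalone main() are illustrative assumptions, not part of this test run.)

// Sketch only: creates a table whose single column family mirrors the attributes
// printed in the log for hbase:namespace's 'info' family.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("sketch_table")) // hypothetical name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
              .setInMemory(true)                 // IN_MEMORY => 'true'
              .setMaxVersions(10)                // VERSIONS => '10'
              .setBlocksize(8192)                // BLOCKSIZE => '8192'
              .build())
          .build());
    }
  }
}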
2023-07-24 20:11:05,701 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=473d7277b89064b213c46acf5599c6bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,701 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229465701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229465701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229465701"}]},"ts":"1690229465701"} 2023-07-24 20:11:05,701 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8748e89fab51a048dc1724dc83ecc654, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,701 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229465701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229465701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229465701"}]},"ts":"1690229465701"} 2023-07-24 20:11:05,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 473d7277b89064b213c46acf5599c6bb, server=jenkins-hbase4.apache.org,43869,1690229464828}] 2023-07-24 20:11:05,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 8748e89fab51a048dc1724dc83ecc654, server=jenkins-hbase4.apache.org,32969,1690229464792}] 2023-07-24 20:11:05,855 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,855 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:11:05,856 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36504, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:11:05,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:05,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8748e89fab51a048dc1724dc83ecc654, NAME => 'hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. service=MultiRowMutationService 2023-07-24 20:11:05,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,860 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 473d7277b89064b213c46acf5599c6bb, NAME => 'hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:05,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:05,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,861 INFO [StoreOpener-8748e89fab51a048dc1724dc83ecc654-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,861 INFO [StoreOpener-473d7277b89064b213c46acf5599c6bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,862 DEBUG [StoreOpener-8748e89fab51a048dc1724dc83ecc654-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/m 2023-07-24 20:11:05,862 DEBUG [StoreOpener-8748e89fab51a048dc1724dc83ecc654-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/m 2023-07-24 20:11:05,862 INFO 
[StoreOpener-8748e89fab51a048dc1724dc83ecc654-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8748e89fab51a048dc1724dc83ecc654 columnFamilyName m 2023-07-24 20:11:05,862 DEBUG [StoreOpener-473d7277b89064b213c46acf5599c6bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/info 2023-07-24 20:11:05,863 DEBUG [StoreOpener-473d7277b89064b213c46acf5599c6bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/info 2023-07-24 20:11:05,863 INFO [StoreOpener-8748e89fab51a048dc1724dc83ecc654-1] regionserver.HStore(310): Store=8748e89fab51a048dc1724dc83ecc654/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,863 INFO [StoreOpener-473d7277b89064b213c46acf5599c6bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 473d7277b89064b213c46acf5599c6bb columnFamilyName info 2023-07-24 20:11:05,864 INFO [StoreOpener-473d7277b89064b213c46acf5599c6bb-1] regionserver.HStore(310): Store=473d7277b89064b213c46acf5599c6bb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:05,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb 
2023-07-24 20:11:05,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:05,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:05,870 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:05,871 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8748e89fab51a048dc1724dc83ecc654; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6f417d30, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:05,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:05,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8748e89fab51a048dc1724dc83ecc654: 2023-07-24 20:11:05,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 473d7277b89064b213c46acf5599c6bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10955677280, jitterRate=0.02032695710659027}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:05,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 473d7277b89064b213c46acf5599c6bb: 2023-07-24 20:11:05,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654., pid=9, masterSystemTime=1690229465855 2023-07-24 20:11:05,873 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb., pid=8, masterSystemTime=1690229465855 2023-07-24 20:11:05,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:05,876 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 
2023-07-24 20:11:05,877 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8748e89fab51a048dc1724dc83ecc654, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:05,877 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690229465877"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229465877"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229465877"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229465877"}]},"ts":"1690229465877"} 2023-07-24 20:11:05,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:05,878 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=473d7277b89064b213c46acf5599c6bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:05,878 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690229465878"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229465878"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229465878"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229465878"}]},"ts":"1690229465878"} 2023-07-24 20:11:05,880 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 20:11:05,880 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 8748e89fab51a048dc1724dc83ecc654, server=jenkins-hbase4.apache.org,32969,1690229464792 in 176 msec 2023-07-24 20:11:05,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 20:11:05,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 473d7277b89064b213c46acf5599c6bb, server=jenkins-hbase4.apache.org,43869,1690229464828 in 178 msec 2023-07-24 20:11:05,882 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 20:11:05,882 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=8748e89fab51a048dc1724dc83ecc654, ASSIGN in 183 msec 2023-07-24 20:11:05,883 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:05,883 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229465883"}]},"ts":"1690229465883"} 2023-07-24 20:11:05,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 20:11:05,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=473d7277b89064b213c46acf5599c6bb, ASSIGN in 289 msec 2023-07-24 20:11:05,884 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 20:11:05,885 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:05,885 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229465885"}]},"ts":"1690229465885"} 2023-07-24 20:11:05,886 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 20:11:05,886 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:05,888 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 239 msec 2023-07-24 20:11:05,888 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:05,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 389 msec 2023-07-24 20:11:05,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 20:11:05,903 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:05,903 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:05,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:05,907 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:05,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 20:11:05,917 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:05,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 
msec 2023-07-24 20:11:05,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 20:11:05,936 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:05,940 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-24 20:11:05,944 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 20:11:05,948 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.075sec 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34595,1690229464695-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 20:11:05,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34595,1690229464695-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 20:11:05,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 20:11:05,953 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 20:11:05,953 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 20:11:05,963 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:05,963 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:05,964 DEBUG [Listener at localhost/44137] zookeeper.ReadOnlyZKClient(139): Connect 0x4a74bb74 to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:05,965 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:11:05,966 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 20:11:05,993 DEBUG [Listener at localhost/44137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4782eee9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:05,996 DEBUG [hconnection-0x5097ffc5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:05,998 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:06,001 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:06,001 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,005 DEBUG [Listener at localhost/44137] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 20:11:06,007 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59534, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 20:11:06,011 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 20:11:06,011 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:06,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 20:11:06,012 DEBUG [Listener at localhost/44137] zookeeper.ReadOnlyZKClient(139): Connect 0x02b37d5d to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:06,025 DEBUG [Listener at localhost/44137] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d35c319, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:06,026 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:06,036 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:06,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:06,042 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10198f56167000a connected 2023-07-24 20:11:06,045 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 20:11:06,059 INFO [Listener at localhost/44137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 20:11:06,060 INFO [Listener at localhost/44137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 20:11:06,065 INFO [Listener at localhost/44137] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44007 2023-07-24 20:11:06,065 INFO [Listener at localhost/44137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 20:11:06,070 DEBUG [Listener at localhost/44137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 20:11:06,071 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:06,072 INFO [Listener at localhost/44137] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 20:11:06,073 INFO [Listener at localhost/44137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44007 connecting to ZooKeeper ensemble=127.0.0.1:60607 2023-07-24 20:11:06,078 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:440070x0, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 20:11:06,080 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(162): regionserver:440070x0, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 20:11:06,080 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44007-0x10198f56167000b connected 2023-07-24 20:11:06,081 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 20:11:06,082 DEBUG [Listener at localhost/44137] zookeeper.ZKUtil(164): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 20:11:06,083 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44007 2023-07-24 20:11:06,084 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44007 2023-07-24 20:11:06,084 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44007 2023-07-24 20:11:06,085 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44007 2023-07-24 20:11:06,085 DEBUG [Listener at localhost/44137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44007 2023-07-24 20:11:06,087 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 20:11:06,087 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 20:11:06,087 INFO [Listener at localhost/44137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 20:11:06,087 INFO [Listener at localhost/44137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 20:11:06,087 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 20:11:06,088 INFO [Listener at localhost/44137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 20:11:06,088 INFO [Listener at localhost/44137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 20:11:06,088 INFO [Listener at localhost/44137] http.HttpServer(1146): Jetty bound to port 45621 2023-07-24 20:11:06,088 INFO [Listener at localhost/44137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 20:11:06,089 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:06,089 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@27cf6cf2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,AVAILABLE} 2023-07-24 20:11:06,090 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:06,090 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@209bd7d3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 20:11:06,095 INFO [Listener at localhost/44137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 20:11:06,095 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 20:11:06,095 INFO [Listener at localhost/44137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 20:11:06,096 INFO [Listener at localhost/44137] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 20:11:06,096 INFO [Listener at localhost/44137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 20:11:06,097 INFO [Listener at localhost/44137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@76830b95{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:06,098 INFO [Listener at localhost/44137] server.AbstractConnector(333): Started ServerConnector@213b3a83{HTTP/1.1, (http/1.1)}{0.0.0.0:45621} 2023-07-24 20:11:06,098 INFO [Listener at localhost/44137] server.Server(415): Started @42052ms 2023-07-24 20:11:06,100 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(951): ClusterId : 77f9e1c8-839e-4654-99af-bf7f9a811793 2023-07-24 20:11:06,101 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 20:11:06,102 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 20:11:06,102 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 20:11:06,104 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 20:11:06,107 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ReadOnlyZKClient(139): Connect 0x229585af to 127.0.0.1:60607 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 20:11:06,112 DEBUG [RS:3;jenkins-hbase4:44007] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c526f8d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 20:11:06,112 DEBUG [RS:3;jenkins-hbase4:44007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@286deedf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:06,121 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44007 2023-07-24 20:11:06,121 INFO [RS:3;jenkins-hbase4:44007] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 20:11:06,121 INFO [RS:3;jenkins-hbase4:44007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 20:11:06,121 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 20:11:06,121 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34595,1690229464695 with isa=jenkins-hbase4.apache.org/172.31.14.131:44007, startcode=1690229466058 2023-07-24 20:11:06,121 DEBUG [RS:3;jenkins-hbase4:44007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 20:11:06,123 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55151, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 20:11:06,124 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34595] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,124 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 20:11:06,124 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0 2023-07-24 20:11:06,124 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39673 2023-07-24 20:11:06,124 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41735 2023-07-24 20:11:06,129 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:06,129 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:06,129 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:06,129 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:06,129 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:06,129 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,129 WARN [RS:3;jenkins-hbase4:44007] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 20:11:06,129 INFO [RS:3;jenkins-hbase4:44007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 20:11:06,129 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 20:11:06,129 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,130 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44007,1690229466058] 2023-07-24 20:11:06,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:06,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:06,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:06,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:06,131 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 20:11:06,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:06,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:06,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:06,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:06,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:06,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,132 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,133 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:06,134 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:06,134 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:06,134 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ZKUtil(162): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,135 DEBUG [RS:3;jenkins-hbase4:44007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 20:11:06,135 INFO [RS:3;jenkins-hbase4:44007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 20:11:06,136 INFO [RS:3;jenkins-hbase4:44007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 20:11:06,136 INFO [RS:3;jenkins-hbase4:44007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 20:11:06,136 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:06,137 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 20:11:06,138 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 20:11:06,138 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,138 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,139 DEBUG [RS:3;jenkins-hbase4:44007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 20:11:06,140 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:06,140 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:06,140 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 20:11:06,153 INFO [RS:3;jenkins-hbase4:44007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 20:11:06,153 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44007,1690229466058-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 20:11:06,164 INFO [RS:3;jenkins-hbase4:44007] regionserver.Replication(203): jenkins-hbase4.apache.org,44007,1690229466058 started 2023-07-24 20:11:06,164 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44007,1690229466058, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44007, sessionid=0x10198f56167000b 2023-07-24 20:11:06,164 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 20:11:06,164 DEBUG [RS:3;jenkins-hbase4:44007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,164 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44007,1690229466058' 2023-07-24 20:11:06,164 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 20:11:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44007,1690229466058' 2023-07-24 20:11:06,165 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 20:11:06,166 DEBUG [RS:3;jenkins-hbase4:44007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 20:11:06,166 DEBUG [RS:3;jenkins-hbase4:44007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 20:11:06,166 INFO [RS:3;jenkins-hbase4:44007] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 20:11:06,166 INFO [RS:3;jenkins-hbase4:44007] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 20:11:06,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:06,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:06,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:06,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:06,176 DEBUG [hconnection-0x64d60b85-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 20:11:06,178 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51832, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 20:11:06,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:06,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:06,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:06,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:59534 deadline: 1690230666194, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
2023-07-24 20:11:06,195 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:11:06,196 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:06,198 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:06,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:06,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:06,255 INFO [Listener at localhost/44137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=557 (was 497) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229465038 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x4a74bb74-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp800462433-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1669531361-2147 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp778270594-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732526446_17 at /127.0.0.1:59886 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6934fae8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-661305b5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x1d231878-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2145531783-2181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:32969-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44137 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) 
org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1605777713-2516 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2145531783-2180 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_972113846_17 at /127.0.0.1:55892 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@549ecd14[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 39813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:44007 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 
java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:44007-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase4:44007Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 39813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp800462433-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1377493577) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: Session-HouseKeeper-11c1a37b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@61ddff1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1605777713-2518 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:39673 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp800462433-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data4/current/BP-1801566786-172.31.14.131-1690229463922 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229465038 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: pool-537-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5224ee1e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp778270594-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x43ff456d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:40043 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x1d231878-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x3e956e84-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3fade5fe java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2076916605@qtp-803818414-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43913 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4e50bdb8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56754@0x4abcb607-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1669531361-2144 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0-prefix:jenkins-hbase4.apache.org,43869,1690229464828 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1605777713-2513 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1669531361-2145-acceptor-0@1dba1323-ServerConnector@1902cfed{HTTP/1.1, (http/1.1)}{0.0.0.0:41735} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-12aa4d8f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 39813 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 2139994977@qtp-2012700175-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp800462433-2248 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1669531361-2146 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2145531783-2178 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@71ee2d73 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@111c8d49 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:34595 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp800462433-2246 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732526446_17 at /127.0.0.1:55888 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at 
localhost/44137.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:40043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x6c555cba-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1605777713-2514 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45907,1690229459172 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0-prefix:jenkins-hbase4.apache.org,43627,1690229464751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data1/current/BP-1801566786-172.31.14.131-1690229463922 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:39673 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:60607): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 2 
on default port 44137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-3-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost/33155-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1815312649@qtp-2012972692-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 39813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x5097ffc5-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d231878-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x6c555cba-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:40043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 1 on default port 39813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2145531783-2177 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x035458c9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@485f4e0d java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x02b37d5d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp778270594-2236-acceptor-0@267fc865-ServerConnector@24c740ab{HTTP/1.1, (http/1.1)}{0.0.0.0:37123} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x43ff456d-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp778270594-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x6c555cba sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:55928 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39673 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging 
thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp778270594-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936673831_17 at /127.0.0.1:55902 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x229585af-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 297475194@qtp-1726182104-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@58752085 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936673831_17 at /127.0.0.1:55828 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2145531783-2179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 39673 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:40043 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1320865763-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp778270594-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3124a64-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43627Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@56903005[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6e6e0585 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936673831_17 at /127.0.0.1:55938 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x229585af-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:59922 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 39673 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data5/current/BP-1801566786-172.31.14.131-1690229463922 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34595,1690229464695 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x43ff456d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 34539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_972113846_17 at /127.0.0.1:55936 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:60607 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: Listener at localhost/44137.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: jenkins-hbase4:43869Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp800462433-2247 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 
Potentially hanging thread: IPC Server handler 2 on default port 34539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 0 on default port 39673 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@73f692b6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 39673 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 34539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2145531783-2176-acceptor-0@7f6e3106-ServerConnector@2b30ac8{HTTP/1.1, (http/1.1)}{0.0.0.0:33849} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936673831_17 at /127.0.0.1:59938 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x1d231878-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp778270594-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2206-acceptor-0@59f63ede-ServerConnector@41c3303a{HTTP/1.1, (http/1.1)}{0.0.0.0:42945} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5092188b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:32969 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 39813 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x64d60b85-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 34539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1669531361-2150 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1669531361-2149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:32969Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 39673 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@343a543b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43869 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:39673 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 660333966@qtp-2012700175-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39493 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 928228786@qtp-1726182104-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36689 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x64d60b85-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0-prefix:jenkins-hbase4.apache.org,32969,1690229464792.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d231878-shared-pool-4 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1669531361-2151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:43627-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34539 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data6/current/BP-1801566786-172.31.14.131-1690229463922 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:40043 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1605777713-2515 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2205 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:55904 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:43627 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x035458c9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2211 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x4a74bb74 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase4:43869-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56754@0x4abcb607 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 44137 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1669531361-2148 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d231878-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:55888 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d231878-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x035458c9-SendThread(127.0.0.1:60607) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0-prefix:jenkins-hbase4.apache.org,32969,1690229464792 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:40043 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp778270594-2235 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1605777713-2517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:43869 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:40043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp800462433-2249 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp800462433-2250-acceptor-0@5bc3d24c-ServerConnector@2171fde7{HTTP/1.1, (http/1.1)}{0.0.0.0:37787} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44137-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 44137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:59944 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1605777713-2511 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1320865763-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34539 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732526446_17 at /127.0.0.1:55854 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:39673 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/33155-SendThread(127.0.0.1:56754) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
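[Editor's aside: the "Potentially hanging thread" entries above and below pair a thread name with its current stack frames, captured at the end of the test. As a point of reference only — this is not the HBase ResourceChecker code, just a minimal sketch built on the standard JDK API such a listing can be produced with — the same shape of report can be generated like this:

import java.util.Map;

// Minimal sketch: print every live JVM thread and its current stack,
// in the same shape as the "Potentially hanging thread" entries in this log.
// NOT the actual org.apache.hadoop.hbase.ResourceChecker implementation,
// only the plain JDK call such a report can be assembled from.
public class ThreadDumpSketch {
    public static void main(String[] args) {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : stacks.entrySet()) {
            // Thread name, e.g. "RS-EventLoopGroup-10-3" or "qtp1605777713-2511"
            System.out.println("Potentially hanging thread: " + e.getKey().getName());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    " + frame); // class.method(File:line)
            }
        }
    }
}

End of aside.]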
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x4a74bb74-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data3/current/BP-1801566786-172.31.14.131-1690229463922 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x229585af sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x3e956e84 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-535-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data2/current/BP-1801566786-172.31.14.131-1690229463922 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39673 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x3e956e84-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56754@0x4abcb607-SendThread(127.0.0.1:56754) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1428553543_17 at /127.0.0.1:55944 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x02b37d5d-SendThread(127.0.0.1:60607) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 2031105748@qtp-2012972692-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44181 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 1519714641@qtp-803818414-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData-prefix:jenkins-hbase4.apache.org,34595,1690229464695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d231878-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1903508758) connection to localhost/127.0.0.1:40043 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60607@0x02b37d5d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1021474301.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@784ddc62 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1801566786-172.31.14.131-1690229463922:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2145531783-2182 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_972113846_17 at /127.0.0.1:59928 [Receiving block BP-1801566786-172.31.14.131-1690229463922:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1605777713-2512-acceptor-0@50b80afa-ServerConnector@213b3a83{HTTP/1.1, (http/1.1)}{0.0.0.0:45621} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2145531783-2175 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/501825316.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732526446_17 at /127.0.0.1:59856 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@54e5cad1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=816 (was 758) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 408), ProcessCount=176 (was 176), AvailableMemoryMB=5377 (was 5712) 2023-07-24 20:11:06,259 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-24 20:11:06,268 INFO [RS:3;jenkins-hbase4:44007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44007%2C1690229466058, suffix=, logDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,44007,1690229466058, archiveDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs, maxLogs=32 2023-07-24 20:11:06,282 INFO [Listener at localhost/44137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=557, OpenFileDescriptor=816, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=176, AvailableMemoryMB=5377 2023-07-24 20:11:06,283 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-24 20:11:06,284 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-24 20:11:06,290 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK] 2023-07-24 20:11:06,295 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK] 2023-07-24 20:11:06,297 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK] 2023-07-24 20:11:06,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 
20:11:06,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:06,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:11:06,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:06,299 INFO [RS:3;jenkins-hbase4:44007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/WALs/jenkins-hbase4.apache.org,44007,1690229466058/jenkins-hbase4.apache.org%2C44007%2C1690229466058.1690229466269 2023-07-24 20:11:06,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:06,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:06,299 DEBUG [RS:3;jenkins-hbase4:44007] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37057,DS-a1dc3f7c-998a-44b7-baa0-f422c70b9d26,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-d7ddee52-5373-4544-a776-a93e442a51de,DISK], DatanodeInfoWithStorage[127.0.0.1:36505,DS-b414c051-d105-46ff-83bd-6b34e4db37d0,DISK]] 2023-07-24 20:11:06,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:06,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:06,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:06,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:06,309 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:06,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:06,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:06,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:06,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:06,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:06,319 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:06,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:06,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:06,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:59534 deadline: 1690230666321, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:06,322 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:11:06,324 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:06,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:06,325 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:06,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:06,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:06,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:06,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 20:11:06,329 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:06,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-24 20:11:06,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:11:06,331 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:06,331 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:06,332 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:06,334 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 20:11:06,335 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 
20:11:06,336 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b empty. 2023-07-24 20:11:06,336 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,336 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 20:11:06,352 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-24 20:11:06,353 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 881490a07c7903e57b3fabfcfe06199b, NAME => 't1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 881490a07c7903e57b3fabfcfe06199b, disabling compactions & flushes 2023-07-24 20:11:06,367 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. after waiting 0 ms 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:06,367 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:06,367 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 881490a07c7903e57b3fabfcfe06199b: 2023-07-24 20:11:06,369 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 20:11:06,370 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229466370"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229466370"}]},"ts":"1690229466370"} 2023-07-24 20:11:06,371 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
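The create request logged above spells out the full 'cf1' column-family spec (VERSIONS => '1', BLOOMFILTER => 'NONE', BLOCKSIZE => '65536', REGION_REPLICATION => '1', and so on). A minimal sketch of how an equivalent table could be created from a client, assuming the standard HBase 2.x Admin and descriptor-builder API; the class name and connection setup here are illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateT1Sketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Mirrors the descriptor printed in the create log: one family 'cf1',
          // a single version, no bloom filter, 64 KB block size.
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .setBlocksize(64 * 1024)
                  .build())
              .build();
          admin.createTable(desc);   // drives the CreateTableProcedure seen above as pid=12
        }
      }
    }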
2023-07-24 20:11:06,372 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 20:11:06,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229466372"}]},"ts":"1690229466372"} 2023-07-24 20:11:06,373 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-24 20:11:06,376 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 20:11:06,377 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 20:11:06,377 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 20:11:06,377 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 20:11:06,377 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 20:11:06,377 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 20:11:06,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, ASSIGN}] 2023-07-24 20:11:06,378 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, ASSIGN 2023-07-24 20:11:06,378 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44007,1690229466058; forceNewPlan=false, retain=false 2023-07-24 20:11:06,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:11:06,529 INFO [jenkins-hbase4:34595] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 20:11:06,530 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=881490a07c7903e57b3fabfcfe06199b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,530 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229466530"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229466530"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229466530"}]},"ts":"1690229466530"} 2023-07-24 20:11:06,532 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 881490a07c7903e57b3fabfcfe06199b, server=jenkins-hbase4.apache.org,44007,1690229466058}] 2023-07-24 20:11:06,592 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 20:11:06,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:11:06,685 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,685 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 20:11:06,688 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48924, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 20:11:06,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 
2023-07-24 20:11:06,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 881490a07c7903e57b3fabfcfe06199b, NAME => 't1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.', STARTKEY => '', ENDKEY => ''} 2023-07-24 20:11:06,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 20:11:06,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,699 INFO [StoreOpener-881490a07c7903e57b3fabfcfe06199b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,700 DEBUG [StoreOpener-881490a07c7903e57b3fabfcfe06199b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b/cf1 2023-07-24 20:11:06,700 DEBUG [StoreOpener-881490a07c7903e57b3fabfcfe06199b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b/cf1 2023-07-24 20:11:06,701 INFO [StoreOpener-881490a07c7903e57b3fabfcfe06199b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 881490a07c7903e57b3fabfcfe06199b columnFamilyName cf1 2023-07-24 20:11:06,701 INFO [StoreOpener-881490a07c7903e57b3fabfcfe06199b-1] regionserver.HStore(310): Store=881490a07c7903e57b3fabfcfe06199b/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 20:11:06,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:06,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 20:11:06,710 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 881490a07c7903e57b3fabfcfe06199b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10240231360, jitterRate=-0.046304136514663696}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 20:11:06,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 881490a07c7903e57b3fabfcfe06199b: 2023-07-24 20:11:06,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b., pid=14, masterSystemTime=1690229466685 2023-07-24 20:11:06,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:06,715 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=881490a07c7903e57b3fabfcfe06199b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:06,715 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229466715"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690229466715"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690229466715"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690229466715"}]},"ts":"1690229466715"} 2023-07-24 20:11:06,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-24 20:11:06,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 881490a07c7903e57b3fabfcfe06199b, server=jenkins-hbase4.apache.org,44007,1690229466058 in 184 msec 2023-07-24 20:11:06,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 
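At this point the region is open on jenkins-hbase4.apache.org,44007 and hbase:meta has been updated to state OPEN with openSeqNum=2. From a client, that location is readable through the normal RegionLocator API; a small sketch, assuming an open Connection is available (names are illustrative):

    import java.io.IOException;

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class LocateT1Sketch {
      // Prints where the single region of 't1' landed, e.g.
      // 881490a07c7903e57b3fabfcfe06199b -> jenkins-hbase4.apache.org,44007,1690229466058
      static void printLocation(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }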
2023-07-24 20:11:06,721 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 20:11:06,721 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, ASSIGN in 341 msec 2023-07-24 20:11:06,722 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 20:11:06,722 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229466722"}]},"ts":"1690229466722"} 2023-07-24 20:11:06,723 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-24 20:11:06,728 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 20:11:06,730 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 402 msec 2023-07-24 20:11:06,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 20:11:06,934 INFO [Listener at localhost/44137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-24 20:11:06,934 DEBUG [Listener at localhost/44137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-24 20:11:06,934 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,936 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-24 20:11:06,936 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,936 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
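The "Waiting until all regions of table t1 get assigned" and "Waiting up to [60,000] milli-secs" lines come from the test utility's polling helpers. A minimal sketch of the same wait, assuming a running HBaseTestingUtility instance named TEST_UTIL (as these tests use) and the generic Waiter loop it is built on:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.Waiter;

    final class WaitForAssignmentSketch {
      static void waitForT1(HBaseTestingUtility TEST_UTIL) throws Exception {
        TableName t1 = TableName.valueOf("t1");

        // Blocks until every region of 't1' is assigned in hbase:meta and in the
        // master's in-memory assignment state, or the 60 s timeout elapses.
        TEST_UTIL.waitUntilAllRegionsAssigned(t1, 60_000);

        // Equivalent lower-level form using the Waiter predicate loop that
        // produces the "Waiting up to [60,000] milli-secs" lines in this log.
        Waiter.waitFor(TEST_UTIL.getConfiguration(), 60_000,
            () -> TEST_UTIL.getAdmin().getRegions(t1).size() == 1);
      }
    }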
2023-07-24 20:11:06,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 20:11:06,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 20:11:06,941 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 20:11:06,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 20:11:06,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:59534 deadline: 1690229526938, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-24 20:11:06,943 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:06,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-24 20:11:07,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,045 INFO [Listener at localhost/44137] client.HBaseAdmin$15(890): Started disable of t1 2023-07-24 20:11:07,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-24 20:11:07,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-24 20:11:07,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:07,049 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229467048"}]},"ts":"1690229467048"} 2023-07-24 20:11:07,050 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-24 20:11:07,051 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-24 20:11:07,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, UNASSIGN}] 2023-07-24 20:11:07,052 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, UNASSIGN 2023-07-24 20:11:07,053 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=881490a07c7903e57b3fabfcfe06199b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:07,053 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229467053"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690229467053"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690229467053"}]},"ts":"1690229467053"} 2023-07-24 20:11:07,054 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 881490a07c7903e57b3fabfcfe06199b, server=jenkins-hbase4.apache.org,44007,1690229466058}] 2023-07-24 20:11:07,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:07,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:07,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 881490a07c7903e57b3fabfcfe06199b, disabling compactions & flushes 2023-07-24 20:11:07,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:07,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:07,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. after waiting 0 ms 2023-07-24 20:11:07,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 
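The second create of 't1' above is rejected up front by CreateTableProcedure.prepareCreate with TableExistsException and the procedure is rolled back, so nothing reaches the rsgroup bookkeeping; that is the property testNotMoveTableToNullRSGroupWhenCreatingExistingTable is checking before the table is torn down. A minimal JUnit-style sketch of asserting that behaviour, assuming an Admin handle and the descriptor from the earlier sketch:

    import static org.junit.Assert.fail;

    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class DuplicateCreateSketch {
      static void assertSecondCreateFails(Admin admin, TableDescriptor desc) throws Exception {
        try {
          admin.createTable(desc);            // 't1' already exists at this point
          fail("expected TableExistsException for duplicate create of t1");
        } catch (TableExistsException expected) {
          // Matches the "exception=org.apache.hadoop.hbase.TableExistsException: t1"
          // entry logged by the RPC handler above.
        }
      }
    }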
2023-07-24 20:11:07,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/default/t1/881490a07c7903e57b3fabfcfe06199b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 20:11:07,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b. 2023-07-24 20:11:07,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 881490a07c7903e57b3fabfcfe06199b: 2023-07-24 20:11:07,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:07,216 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=881490a07c7903e57b3fabfcfe06199b, regionState=CLOSED 2023-07-24 20:11:07,216 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690229467216"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690229467216"}]},"ts":"1690229467216"} 2023-07-24 20:11:07,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 20:11:07,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 881490a07c7903e57b3fabfcfe06199b, server=jenkins-hbase4.apache.org,44007,1690229466058 in 163 msec 2023-07-24 20:11:07,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 20:11:07,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=881490a07c7903e57b3fabfcfe06199b, UNASSIGN in 167 msec 2023-07-24 20:11:07,221 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690229467221"}]},"ts":"1690229467221"} 2023-07-24 20:11:07,222 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-24 20:11:07,223 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-24 20:11:07,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-24 20:11:07,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 20:11:07,353 INFO [Listener at localhost/44137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-24 20:11:07,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-24 20:11:07,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-24 20:11:07,357 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 20:11:07,357 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-24 20:11:07,358 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-24 20:11:07,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,361 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:07,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 20:11:07,363 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b/cf1, FileablePath, hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b/recovered.edits] 2023-07-24 20:11:07,369 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b/recovered.edits/4.seqid to hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/archive/data/default/t1/881490a07c7903e57b3fabfcfe06199b/recovered.edits/4.seqid 2023-07-24 20:11:07,369 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/.tmp/data/default/t1/881490a07c7903e57b3fabfcfe06199b 2023-07-24 20:11:07,369 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 20:11:07,372 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-24 20:11:07,373 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-24 20:11:07,375 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-24 20:11:07,376 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-24 20:11:07,376 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
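The DisableTableProcedure (pid=16) traced above and the DeleteTableProcedure (pid=19) running here are the server side of the usual two-step client teardown. A minimal sketch, assuming an Admin handle:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropT1Sketch {
      static void dropIfPresent(Admin admin) throws Exception {
        TableName t1 = TableName.valueOf("t1");
        if (admin.tableExists(t1)) {
          if (admin.isTableEnabled(t1)) {
            admin.disableTable(t1);   // DisableTableProcedure: unassign regions, state=DISABLED
          }
          admin.deleteTable(t1);      // DeleteTableProcedure: archive region dirs, clean hbase:meta
        }
      }
    }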
2023-07-24 20:11:07,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690229467376"}]},"ts":"9223372036854775807"} 2023-07-24 20:11:07,377 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 20:11:07,377 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 881490a07c7903e57b3fabfcfe06199b, NAME => 't1,,1690229466326.881490a07c7903e57b3fabfcfe06199b.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 20:11:07,377 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-24 20:11:07,377 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690229467377"}]},"ts":"9223372036854775807"} 2023-07-24 20:11:07,378 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-24 20:11:07,382 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 20:11:07,383 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 28 msec 2023-07-24 20:11:07,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 20:11:07,463 INFO [Listener at localhost/44137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-24 20:11:07,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
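From here the per-test rsgroup cleanup repeats: list the groups, move no tables and no servers back to 'default', drop and re-add the 'master' group, then try to move the master's own address (jenkins-hbase4.apache.org:34595) into it. Because the master is not an online region server, RSGroupAdminServer.moveServers rejects that last call with the same ConstraintException already seen at the top of this excerpt, logged again below as "Got this on setup, FYI" and ignored. A hedged sketch of that call path, assuming the RSGroupAdminClient helper named in the stack trace (constructed over a Connection, as these tests do); names are illustrative:

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RsGroupCleanupSketch {
      static void moveMasterIntoMasterGroup(Connection conn) {
        try {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Address master = Address.fromParts("jenkins-hbase4.apache.org", 34595);
          // Rejected server-side with ConstraintException when the address does not
          // belong to an online region server, exactly as in the traces in this log.
          rsGroupAdmin.moveServers(Collections.singleton(master), "master");
        } catch (IOException e) {
          // The test base swallows this and logs "Got this on setup, FYI".
        }
      }
    }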
2023-07-24 20:11:07,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,482 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:59534 deadline: 1690230667493, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,494 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:11:07,498 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,499 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,521 INFO [Listener at localhost/44137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567 (was 557) - Thread LEAK? -, OpenFileDescriptor=835 (was 816) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=176 (was 176), AvailableMemoryMB=5352 (was 5377) 2023-07-24 20:11:07,521 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-24 20:11:07,541 INFO [Listener at localhost/44137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=567, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=176, AvailableMemoryMB=5352 2023-07-24 20:11:07,541 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-24 20:11:07,541 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-24 20:11:07,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:11:07,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,557 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,559 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230667566, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,566 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:11:07,568 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,569 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 20:11:07,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:11:07,571 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-24 20:11:07,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 20:11:07,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 20:11:07,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
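The ConstraintException that recurs through this section comes from RSGroupAdminServer.moveServers: the per-test cleanup in TestRSGroupsBase.tearDownAfterMethod re-creates the "master" rsgroup and then asks to move the active master's address, jenkins-hbase4.apache.org:34595, into it, which the master rejects because it is not an online region server; the test merely logs this as "Got this on setup, FYI". A minimal sketch of that client-side call path, assuming the hbase-rsgroup RSGroupAdminClient API named in the stack traces (the connection setup and exception handling below are illustrative, not copied from the test):

// Hedged sketch: reproduces the RemoveRSGroup / AddRSGroup / MoveServers sequence
// visible in the log. Host and port come from the log; everything else is illustrative.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
      // Re-create the "master" group, mirroring the RemoveRSGroup / AddRSGroup
      // requests recorded above.
      groupAdmin.removeRSGroup("master");
      groupAdmin.addRSGroup("master");
      try {
        // Ask to move the master's address into the new group, as the test cleanup does.
        groupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34595)),
            "master");
      } catch (ConstraintException e) {
        // Expected against a live cluster: the master is not an online region server,
        // so the move is rejected ("... is either offline or it does not exist").
      }
    }
  }
}

Because the master can never be an online region server in this mini-cluster, the moveServers call always ends in the catch path, which is why the same stack trace repeats before and after every test method in this log.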
2023-07-24 20:11:07,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,591 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230667600, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,601 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:11:07,602 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,603 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,622 INFO [Listener at localhost/44137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 567) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=176 (was 176), AvailableMemoryMB=5351 (was 5352) 2023-07-24 20:11:07,622 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-24 20:11:07,643 INFO [Listener at localhost/44137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=176, AvailableMemoryMB=5350 2023-07-24 20:11:07,643 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-24 20:11:07,643 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-24 20:11:07,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:11:07,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,658 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,661 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230667673, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,674 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:11:07,675 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,676 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:11:07,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,692 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230667701, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,701 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:11:07,703 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,704 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,725 INFO [Listener at localhost/44137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=568 (was 569), OpenFileDescriptor=833 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=176 (was 176), AvailableMemoryMB=5351 (was 5350) - AvailableMemoryMB LEAK? 
- 2023-07-24 20:11:07,725 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-24 20:11:07,742 INFO [Listener at localhost/44137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=568, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=176, AvailableMemoryMB=5349 2023-07-24 20:11:07,743 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-24 20:11:07,743 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-24 20:11:07,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:07,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 20:11:07,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:07,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:07,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:07,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:07,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:07,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:07,756 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:07,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:07,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,760 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:07,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:07,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230667765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:07,765 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 20:11:07,767 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:07,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,768 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:07,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:07,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:07,769 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-24 20:11:07,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-24 20:11:07,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 20:11:07,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:07,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:07,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 20:11:07,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:07,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:07,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:07,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 20:11:07,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:07,782 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 20:11:07,785 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:07,791 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-24 20:11:07,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 20:11:07,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 20:11:07,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:07,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:59534 deadline: 1690230667883, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-24 20:11:07,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 20:11:07,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:07,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 20:11:07,906 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 20:11:07,913 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 16 msec 2023-07-24 20:11:08,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 20:11:08,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-24 20:11:08,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 20:11:08,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:08,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 20:11:08,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:08,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 20:11:08,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:08,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:08,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:08,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 20:11:08,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,021 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,024 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 20:11:08,025 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,026 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 20:11:08,026 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 20:11:08,027 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,029 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 20:11:08,030 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-24 20:11:08,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 20:11:08,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 20:11:08,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 20:11:08,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:08,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:08,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 20:11:08,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:08,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:08,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:08,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:08,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:59534 deadline: 1690229528139, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-24 20:11:08,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:08,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:08,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:08,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:11:08,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:08,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-24 20:11:08,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:08,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:08,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 20:11:08,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:08,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 20:11:08,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 20:11:08,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 20:11:08,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 20:11:08,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 20:11:08,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 20:11:08,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:08,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 20:11:08,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 20:11:08,165 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 20:11:08,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 20:11:08,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 20:11:08,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 20:11:08,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 20:11:08,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 20:11:08,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:08,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:08,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34595] to rsgroup master 2023-07-24 20:11:08,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 20:11:08,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59534 deadline: 1690230668173, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 2023-07-24 20:11:08,174 WARN [Listener at localhost/44137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34595 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 20:11:08,175 INFO [Listener at localhost/44137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 20:11:08,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 20:11:08,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 20:11:08,176 INFO [Listener at localhost/44137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32969, jenkins-hbase4.apache.org:43627, jenkins-hbase4.apache.org:43869, jenkins-hbase4.apache.org:44007], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 20:11:08,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 20:11:08,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34595] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 20:11:08,195 INFO [Listener at localhost/44137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=568 (was 568), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=176 (was 176), AvailableMemoryMB=5350 (was 5349) - AvailableMemoryMB LEAK? 
- 2023-07-24 20:11:08,195 WARN [Listener at localhost/44137] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-24 20:11:08,195 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 20:11:08,195 INFO [Listener at localhost/44137] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4a74bb74 to 127.0.0.1:60607 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] util.JVMClusterUtil(257): Found active master hash=1353079353, stopped=false 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 20:11:08,195 DEBUG [Listener at localhost/44137] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 20:11:08,195 INFO [Listener at localhost/44137] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:08,198 INFO [Listener at localhost/44137] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 20:11:08,198 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:08,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:08,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:08,199 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:08,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:08,199 DEBUG [Listener at localhost/44137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43ff456d to 127.0.0.1:60607 2023-07-24 20:11:08,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 20:11:08,199 DEBUG [Listener at localhost/44137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,199 INFO [Listener at localhost/44137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43627,1690229464751' ***** 2023-07-24 20:11:08,199 INFO [Listener at localhost/44137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:08,199 INFO [Listener at localhost/44137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32969,1690229464792' ***** 2023-07-24 20:11:08,199 INFO [Listener at localhost/44137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:08,199 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:08,201 INFO [Listener at localhost/44137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43869,1690229464828' ***** 2023-07-24 20:11:08,201 INFO [Listener at localhost/44137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:08,201 INFO [Listener at localhost/44137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44007,1690229466058' ***** 2023-07-24 20:11:08,203 INFO [Listener at localhost/44137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 20:11:08,201 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:08,203 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:08,201 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:08,206 INFO [RS:0;jenkins-hbase4:43627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@48ff2726{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:08,209 INFO [RS:1;jenkins-hbase4:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4b90518a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:08,209 INFO [RS:3;jenkins-hbase4:44007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@76830b95{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:08,210 INFO [RS:0;jenkins-hbase4:43627] server.AbstractConnector(383): Stopped ServerConnector@2b30ac8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,209 INFO 
[RS:2;jenkins-hbase4:43869] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@761c23c7{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 20:11:08,210 INFO [RS:1;jenkins-hbase4:32969] server.AbstractConnector(383): Stopped ServerConnector@41c3303a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,211 INFO [RS:3;jenkins-hbase4:44007] server.AbstractConnector(383): Stopped ServerConnector@213b3a83{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,210 INFO [RS:0;jenkins-hbase4:43627] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:08,211 INFO [RS:2;jenkins-hbase4:43869] server.AbstractConnector(383): Stopped ServerConnector@24c740ab{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,211 INFO [RS:3;jenkins-hbase4:44007] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:08,211 INFO [RS:1;jenkins-hbase4:32969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:08,211 INFO [RS:0;jenkins-hbase4:43627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f1a837d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:08,211 INFO [RS:2;jenkins-hbase4:43869] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:08,213 INFO [RS:1;jenkins-hbase4:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@174373f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:08,214 INFO [RS:2;jenkins-hbase4:43869] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5aabd8da{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:08,215 INFO [RS:1;jenkins-hbase4:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5021e219{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:08,212 INFO [RS:3;jenkins-hbase4:44007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@209bd7d3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:08,215 INFO [RS:2;jenkins-hbase4:43869] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@17ba8aa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:08,213 INFO [RS:0;jenkins-hbase4:43627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@463f05bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:08,216 INFO [RS:3;jenkins-hbase4:44007] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@27cf6cf2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:08,217 INFO [RS:3;jenkins-hbase4:44007] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:08,217 INFO [RS:3;jenkins-hbase4:44007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:08,217 INFO [RS:3;jenkins-hbase4:44007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:08,217 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:08,217 DEBUG [RS:3;jenkins-hbase4:44007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x229585af to 127.0.0.1:60607 2023-07-24 20:11:08,217 DEBUG [RS:3;jenkins-hbase4:44007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,217 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44007,1690229466058; all regions closed. 2023-07-24 20:11:08,217 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:08,221 INFO [RS:0;jenkins-hbase4:43627] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:08,221 INFO [RS:1;jenkins-hbase4:32969] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:08,221 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:08,221 INFO [RS:0;jenkins-hbase4:43627] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:08,221 INFO [RS:0;jenkins-hbase4:43627] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:08,221 INFO [RS:1;jenkins-hbase4:32969] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:08,222 INFO [RS:1;jenkins-hbase4:32969] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:08,222 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(3305): Received CLOSE for 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:08,222 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:08,221 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:08,222 DEBUG [RS:0;jenkins-hbase4:43627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c555cba to 127.0.0.1:60607 2023-07-24 20:11:08,222 DEBUG [RS:0;jenkins-hbase4:43627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,222 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43627,1690229464751; all regions closed. 
2023-07-24 20:11:08,223 DEBUG [RS:3;jenkins-hbase4:44007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs 2023-07-24 20:11:08,223 INFO [RS:3;jenkins-hbase4:44007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44007%2C1690229466058:(num 1690229466269) 2023-07-24 20:11:08,223 DEBUG [RS:3;jenkins-hbase4:44007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,223 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:08,223 INFO [RS:2;jenkins-hbase4:43869] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 20:11:08,223 DEBUG [RS:1;jenkins-hbase4:32969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e956e84 to 127.0.0.1:60607 2023-07-24 20:11:08,223 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 20:11:08,223 INFO [RS:3;jenkins-hbase4:44007] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,223 DEBUG [RS:1;jenkins-hbase4:32969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,223 INFO [RS:2;jenkins-hbase4:43869] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 20:11:08,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8748e89fab51a048dc1724dc83ecc654, disabling compactions & flushes 2023-07-24 20:11:08,223 INFO [RS:2;jenkins-hbase4:43869] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 20:11:08,223 INFO [RS:1;jenkins-hbase4:32969] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:08,223 INFO [RS:1;jenkins-hbase4:32969] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:08,224 INFO [RS:1;jenkins-hbase4:32969] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 20:11:08,224 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 20:11:08,223 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(3305): Received CLOSE for 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:08,224 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 20:11:08,224 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1478): Online Regions={8748e89fab51a048dc1724dc83ecc654=hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654., 1588230740=hbase:meta,,1.1588230740} 2023-07-24 20:11:08,224 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:08,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:08,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:08,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 
after waiting 0 ms 2023-07-24 20:11:08,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:08,224 DEBUG [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1504): Waiting on 1588230740, 8748e89fab51a048dc1724dc83ecc654 2023-07-24 20:11:08,224 DEBUG [RS:2;jenkins-hbase4:43869] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x035458c9 to 127.0.0.1:60607 2023-07-24 20:11:08,224 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 20:11:08,224 INFO [RS:3;jenkins-hbase4:44007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:08,224 INFO [RS:3;jenkins-hbase4:44007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:08,224 INFO [RS:3;jenkins-hbase4:44007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:08,224 INFO [RS:3;jenkins-hbase4:44007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 20:11:08,224 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 20:11:08,224 DEBUG [RS:2;jenkins-hbase4:43869] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8748e89fab51a048dc1724dc83ecc654 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-24 20:11:08,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 473d7277b89064b213c46acf5599c6bb, disabling compactions & flushes 2023-07-24 20:11:08,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:08,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:08,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. after waiting 0 ms 2023-07-24 20:11:08,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 
2023-07-24 20:11:08,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 473d7277b89064b213c46acf5599c6bb 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-24 20:11:08,226 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 20:11:08,226 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 20:11:08,226 INFO [RS:3;jenkins-hbase4:44007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44007 2023-07-24 20:11:08,224 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:08,227 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1478): Online Regions={473d7277b89064b213c46acf5599c6bb=hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb.} 2023-07-24 20:11:08,227 DEBUG [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1504): Waiting on 473d7277b89064b213c46acf5599c6bb 2023-07-24 20:11:08,227 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 20:11:08,228 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 20:11:08,228 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-24 20:11:08,230 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:08,230 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:08,230 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,230 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,231 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:08,230 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44007,1690229466058 2023-07-24 20:11:08,231 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,231 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,231 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,234 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44007,1690229466058] 2023-07-24 20:11:08,234 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44007,1690229466058; numProcessing=1 2023-07-24 20:11:08,235 DEBUG [RS:0;jenkins-hbase4:43627] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs 2023-07-24 20:11:08,235 INFO [RS:0;jenkins-hbase4:43627] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43627%2C1690229464751:(num 1690229465292) 2023-07-24 20:11:08,235 DEBUG [RS:0;jenkins-hbase4:43627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,235 INFO [RS:0;jenkins-hbase4:43627] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,236 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44007,1690229466058 already deleted, retry=false 2023-07-24 20:11:08,236 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44007,1690229466058 expired; onlineServers=3 2023-07-24 20:11:08,241 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,242 INFO [RS:0;jenkins-hbase4:43627] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:08,242 INFO [RS:0;jenkins-hbase4:43627] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:08,243 INFO [RS:0;jenkins-hbase4:43627] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:08,243 INFO [RS:0;jenkins-hbase4:43627] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 20:11:08,242 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 20:11:08,244 INFO [RS:0;jenkins-hbase4:43627] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43627 2023-07-24 20:11:08,246 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,254 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:08,254 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:08,254 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,254 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,254 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43627,1690229464751 2023-07-24 20:11:08,254 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43627,1690229464751] 2023-07-24 20:11:08,254 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43627,1690229464751; numProcessing=2 2023-07-24 20:11:08,256 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43627,1690229464751 already deleted, retry=false 2023-07-24 20:11:08,256 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43627,1690229464751 expired; onlineServers=2 2023-07-24 20:11:08,269 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,271 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/info/51bb4cd14f694423abdb18689ef9020d 2023-07-24 20:11:08,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/.tmp/m/aae7b01d87e641899c575a307cc9bae5 2023-07-24 20:11:08,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/.tmp/info/5552ae4cbfe34089944dd60fba5403d5 2023-07-24 20:11:08,333 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded 
Delete Family Bloom (CompoundBloomFilter) metadata for 51bb4cd14f694423abdb18689ef9020d 2023-07-24 20:11:08,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aae7b01d87e641899c575a307cc9bae5 2023-07-24 20:11:08,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/.tmp/m/aae7b01d87e641899c575a307cc9bae5 as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/m/aae7b01d87e641899c575a307cc9bae5 2023-07-24 20:11:08,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5552ae4cbfe34089944dd60fba5403d5 2023-07-24 20:11:08,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/.tmp/info/5552ae4cbfe34089944dd60fba5403d5 as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/info/5552ae4cbfe34089944dd60fba5403d5 2023-07-24 20:11:08,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aae7b01d87e641899c575a307cc9bae5 2023-07-24 20:11:08,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/m/aae7b01d87e641899c575a307cc9bae5, entries=12, sequenceid=29, filesize=5.4 K 2023-07-24 20:11:08,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 8748e89fab51a048dc1724dc83ecc654 in 118ms, sequenceid=29, compaction requested=false 2023-07-24 20:11:08,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5552ae4cbfe34089944dd60fba5403d5 2023-07-24 20:11:08,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/info/5552ae4cbfe34089944dd60fba5403d5, entries=3, sequenceid=9, filesize=5.0 K 2023-07-24 20:11:08,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 473d7277b89064b213c46acf5599c6bb in 117ms, sequenceid=9, compaction requested=false 2023-07-24 20:11:08,351 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/rep_barrier/35c1f14d86e54406844a219fb5fefa08 2023-07-24 20:11:08,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/rsgroup/8748e89fab51a048dc1724dc83ecc654/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-24 20:11:08,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/namespace/473d7277b89064b213c46acf5599c6bb/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 20:11:08,358 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:11:08,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:08,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:08,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 473d7277b89064b213c46acf5599c6bb: 2023-07-24 20:11:08,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8748e89fab51a048dc1724dc83ecc654: 2023-07-24 20:11:08,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690229465498.473d7277b89064b213c46acf5599c6bb. 2023-07-24 20:11:08,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690229465648.8748e89fab51a048dc1724dc83ecc654. 2023-07-24 20:11:08,361 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35c1f14d86e54406844a219fb5fefa08 2023-07-24 20:11:08,372 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/table/782382f2796a4f969badcd777708175b 2023-07-24 20:11:08,377 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 782382f2796a4f969badcd777708175b 2023-07-24 20:11:08,378 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/info/51bb4cd14f694423abdb18689ef9020d as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/info/51bb4cd14f694423abdb18689ef9020d 2023-07-24 20:11:08,383 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51bb4cd14f694423abdb18689ef9020d 2023-07-24 20:11:08,383 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/info/51bb4cd14f694423abdb18689ef9020d, entries=22, sequenceid=26, filesize=7.3 K 2023-07-24 20:11:08,384 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/rep_barrier/35c1f14d86e54406844a219fb5fefa08 as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/rep_barrier/35c1f14d86e54406844a219fb5fefa08 2023-07-24 20:11:08,389 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35c1f14d86e54406844a219fb5fefa08 2023-07-24 20:11:08,389 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/rep_barrier/35c1f14d86e54406844a219fb5fefa08, entries=1, sequenceid=26, filesize=4.9 K 2023-07-24 20:11:08,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/.tmp/table/782382f2796a4f969badcd777708175b as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/table/782382f2796a4f969badcd777708175b 2023-07-24 20:11:08,394 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 782382f2796a4f969badcd777708175b 2023-07-24 20:11:08,394 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/table/782382f2796a4f969badcd777708175b, entries=6, sequenceid=26, filesize=5.1 K 2023-07-24 20:11:08,395 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 167ms, sequenceid=26, compaction requested=false 2023-07-24 20:11:08,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 20:11:08,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 20:11:08,408 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 20:11:08,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 20:11:08,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 20:11:08,424 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32969,1690229464792; all regions closed. 2023-07-24 20:11:08,428 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43869,1690229464828; all regions closed. 
2023-07-24 20:11:08,433 DEBUG [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs 2023-07-24 20:11:08,433 INFO [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32969%2C1690229464792.meta:.meta(num 1690229465407) 2023-07-24 20:11:08,433 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:08,433 INFO [RS:0;jenkins-hbase4:43627] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43627,1690229464751; zookeeper connection closed. 2023-07-24 20:11:08,433 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43627-0x10198f561670001, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:08,434 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@602279f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@602279f8 2023-07-24 20:11:08,436 DEBUG [RS:2;jenkins-hbase4:43869] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs 2023-07-24 20:11:08,437 INFO [RS:2;jenkins-hbase4:43869] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43869%2C1690229464828:(num 1690229465304) 2023-07-24 20:11:08,437 DEBUG [RS:2;jenkins-hbase4:43869] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,437 INFO [RS:2;jenkins-hbase4:43869] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,438 INFO [RS:2;jenkins-hbase4:43869] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:08,438 INFO [RS:2;jenkins-hbase4:43869] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 20:11:08,438 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:08,438 INFO [RS:2;jenkins-hbase4:43869] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 20:11:08,438 INFO [RS:2;jenkins-hbase4:43869] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 20:11:08,439 INFO [RS:2;jenkins-hbase4:43869] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43869 2023-07-24 20:11:08,441 DEBUG [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/oldWALs 2023-07-24 20:11:08,441 INFO [RS:1;jenkins-hbase4:32969] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32969%2C1690229464792:(num 1690229465283) 2023-07-24 20:11:08,441 DEBUG [RS:1;jenkins-hbase4:32969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,441 INFO [RS:1;jenkins-hbase4:32969] regionserver.LeaseManager(133): Closed leases 2023-07-24 20:11:08,441 INFO [RS:1;jenkins-hbase4:32969] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 20:11:08,441 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:08,441 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,441 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:08,441 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43869,1690229464828 2023-07-24 20:11:08,443 INFO [RS:1;jenkins-hbase4:32969] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32969 2023-07-24 20:11:08,444 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43869,1690229464828] 2023-07-24 20:11:08,444 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43869,1690229464828; numProcessing=3 2023-07-24 20:11:08,446 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43869,1690229464828 already deleted, retry=false 2023-07-24 20:11:08,446 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43869,1690229464828 expired; onlineServers=1 2023-07-24 20:11:08,452 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32969,1690229464792 2023-07-24 20:11:08,452 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 20:11:08,453 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32969,1690229464792] 
2023-07-24 20:11:08,453 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32969,1690229464792; numProcessing=4 2023-07-24 20:11:08,454 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32969,1690229464792 already deleted, retry=false 2023-07-24 20:11:08,454 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32969,1690229464792 expired; onlineServers=0 2023-07-24 20:11:08,454 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34595,1690229464695' ***** 2023-07-24 20:11:08,454 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 20:11:08,454 DEBUG [M:0;jenkins-hbase4:34595] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5499c4f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 20:11:08,454 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 20:11:08,456 INFO [M:0;jenkins-hbase4:34595] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@52f44e2f{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 20:11:08,457 INFO [M:0;jenkins-hbase4:34595] server.AbstractConnector(383): Stopped ServerConnector@1902cfed{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,457 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 20:11:08,457 INFO [M:0;jenkins-hbase4:34595] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 20:11:08,457 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 20:11:08,458 INFO [M:0;jenkins-hbase4:34595] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23b1628c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 20:11:08,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 20:11:08,458 INFO [M:0;jenkins-hbase4:34595] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@576b5e7e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/hadoop.log.dir/,STOPPED} 2023-07-24 20:11:08,459 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34595,1690229464695 2023-07-24 20:11:08,459 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34595,1690229464695; all regions closed. 
2023-07-24 20:11:08,459 DEBUG [M:0;jenkins-hbase4:34595] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 20:11:08,459 INFO [M:0;jenkins-hbase4:34595] master.HMaster(1491): Stopping master jetty server 2023-07-24 20:11:08,460 INFO [M:0;jenkins-hbase4:34595] server.AbstractConnector(383): Stopped ServerConnector@2171fde7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 20:11:08,460 DEBUG [M:0;jenkins-hbase4:34595] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 20:11:08,460 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 20:11:08,460 DEBUG [M:0;jenkins-hbase4:34595] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 20:11:08,460 INFO [M:0;jenkins-hbase4:34595] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 20:11:08,460 INFO [M:0;jenkins-hbase4:34595] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 20:11:08,460 INFO [M:0;jenkins-hbase4:34595] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 20:11:08,460 DEBUG [M:0;jenkins-hbase4:34595] master.HMaster(1512): Stopping service threads 2023-07-24 20:11:08,460 INFO [M:0;jenkins-hbase4:34595] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 20:11:08,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229465038] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690229465038,5,FailOnTimeoutGroup] 2023-07-24 20:11:08,460 ERROR [M:0;jenkins-hbase4:34595] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 20:11:08,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229465038] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690229465038,5,FailOnTimeoutGroup] 2023-07-24 20:11:08,461 INFO [M:0;jenkins-hbase4:34595] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 20:11:08,461 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 20:11:08,461 DEBUG [M:0;jenkins-hbase4:34595] zookeeper.ZKUtil(398): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 20:11:08,461 WARN [M:0;jenkins-hbase4:34595] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 20:11:08,461 INFO [M:0;jenkins-hbase4:34595] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 20:11:08,461 INFO [M:0;jenkins-hbase4:34595] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 20:11:08,461 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 20:11:08,461 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 20:11:08,461 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:08,461 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 20:11:08,461 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:08,461 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.62 KB 2023-07-24 20:11:08,473 INFO [M:0;jenkins-hbase4:34595] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/048012934947473f924e6d47663d8e87 2023-07-24 20:11:08,481 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/048012934947473f924e6d47663d8e87 as hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/048012934947473f924e6d47663d8e87 2023-07-24 20:11:08,487 INFO [M:0;jenkins-hbase4:34595] regionserver.HStore(1080): Added hdfs://localhost:39673/user/jenkins/test-data/74278698-4a13-dc9d-890f-e5e37d397dd0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/048012934947473f924e6d47663d8e87, entries=22, sequenceid=175, filesize=11.1 K 2023-07-24 20:11:08,488 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78004, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=175, compaction requested=false 2023-07-24 20:11:08,490 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 20:11:08,490 DEBUG [M:0;jenkins-hbase4:34595] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 20:11:08,499 INFO [M:0;jenkins-hbase4:34595] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 20:11:08,499 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 20:11:08,500 INFO [M:0;jenkins-hbase4:34595] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34595 2023-07-24 20:11:08,503 DEBUG [M:0;jenkins-hbase4:34595] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34595,1690229464695 already deleted, retry=false 2023-07-24 20:11:08,534 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:08,534 INFO [RS:3;jenkins-hbase4:44007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44007,1690229466058; zookeeper connection closed. 
2023-07-24 20:11:08,534 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:44007-0x10198f56167000b, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:08,534 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a02ab50] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a02ab50 2023-07-24 20:11:09,135 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,135 INFO [M:0;jenkins-hbase4:34595] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34595,1690229464695; zookeeper connection closed. 2023-07-24 20:11:09,135 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): master:34595-0x10198f561670000, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,235 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,235 INFO [RS:1;jenkins-hbase4:32969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32969,1690229464792; zookeeper connection closed. 2023-07-24 20:11:09,235 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10198f561670002, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,236 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@58893535] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@58893535 2023-07-24 20:11:09,335 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,335 INFO [RS:2;jenkins-hbase4:43869] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43869,1690229464828; zookeeper connection closed. 
2023-07-24 20:11:09,336 DEBUG [Listener at localhost/44137-EventThread] zookeeper.ZKWatcher(600): regionserver:43869-0x10198f561670003, quorum=127.0.0.1:60607, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 20:11:09,336 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d897623] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d897623 2023-07-24 20:11:09,336 INFO [Listener at localhost/44137] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 20:11:09,337 WARN [Listener at localhost/44137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:09,349 INFO [Listener at localhost/44137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:09,452 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:09,452 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1801566786-172.31.14.131-1690229463922 (Datanode Uuid 41719b79-33e0-404f-8b74-6c1ff3cb401d) service to localhost/127.0.0.1:39673 2023-07-24 20:11:09,453 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data5/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,453 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data6/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,454 WARN [Listener at localhost/44137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:09,456 INFO [Listener at localhost/44137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:09,559 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:09,559 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1801566786-172.31.14.131-1690229463922 (Datanode Uuid 185844a4-f2a5-4d8b-920e-92c87ff812f7) service to localhost/127.0.0.1:39673 2023-07-24 20:11:09,559 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data3/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,560 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data4/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,561 WARN [Listener at localhost/44137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 20:11:09,564 INFO [Listener at localhost/44137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:09,666 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 20:11:09,666 WARN [BP-1801566786-172.31.14.131-1690229463922 heartbeating to localhost/127.0.0.1:39673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1801566786-172.31.14.131-1690229463922 (Datanode Uuid 2562a75e-9e92-4d13-a5a8-bb4c901f3fa2) service to localhost/127.0.0.1:39673 2023-07-24 20:11:09,667 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data1/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,667 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3f5ab657-2b22-d9ea-8fad-b64729a31b48/cluster_90be22ae-8281-7d6a-0dfd-0edb5b842584/dfs/data/data2/current/BP-1801566786-172.31.14.131-1690229463922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 20:11:09,675 INFO [Listener at localhost/44137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 20:11:09,789 INFO [Listener at localhost/44137] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 20:11:09,817 INFO [Listener at localhost/44137] hbase.HBaseTestingUtility(1293): Minicluster is down